THIS NOTEBOOK DOES NOT USE OUR RASP-BASED MODEL¶

Notebook description - Predicting details of errors made by the Wu et al 2023 baseline Encoder-Decoder Transformer on the ReCOGS_pos obj_pp_to_subj_pp split¶

This notebook: https://colab.research.google.com/drive/1Z0_EXV-bvmO2mRcnHmDpFuHIv4KHOpz-?usp=sharing (supplemental material for https://github.com/willy-b/RASP-for-ReCOGS paper)

Related notebooks:

Replaces https://colab.research.google.com/drive/1MiiEAchmaGulsTwNHs98-ill-UM7TK3i .

For n=20 replications of the overall accuracy of the Wu et al 2023 baseline Encoder-Decoder Transformer on ReCOGS_pos, see https://colab.research.google.com/drive/12mXX5L1I4rpwl1Jk8hCm-xyAkqiKJEo7#scrollTo=x2F3ZAY7uZjb ; that notebook is the source of our reported baseline scores (and also contains baseline layer-variation experiments comparing n=3 and n=4 Transformer blocks). Here we re-run 10 runs to collect the specific errors made, for detailed error prediction/analysis.

See also https://colab.research.google.com/drive/1SOdNcVb4lfbeJTFfxs4HnFOf2GY_D-l4 for a test of the Wu et al 2023 baseline Encoder-Decoder Transformer on a new split we introduce, "v_dat_p2_pp_moved_to_recipient". Our RASP model predicted this split would be as difficult as obj_pp_to_subj_pp (the previously reported hardest generalization split) despite involving no subjects and no nouns left of the verb: under our hypothesis the mechanism is the same, but the prepositional phrase in existing examples is transferred from the theme noun to the recipient noun, both right of the verb, giving a totally independent check.

Hypothesis: a flat / non-tree pattern recognizer cannot ignore "pp np" inserted between related words, and mistakes the now-closer prepositional noun for whatever relationship the word on the left had¶

Hypothesis: the baseline Encoder-Decoder is doing flat (non-tree, non-recursive) pattern matching, and the cause of the extremely poor obj_pp_to_subj_pp generalization performance is that prepositional phrases attach to the right. In English, given Subject-Verb-Object order, the subject is usually to the left of the verb it has a relationship with; when a noun on the left is modified by a preposition, a prepositional noun phrase is inserted to its right, before the related verb, so the closest noun on the left of that verb has changed. When an object is modified by a prepositional phrase, the object is usually to the right of the verb and the prepositional phrase is added to the right of the noun, NOT in between the two related words, so it does not affect a pattern recognizer looking for the verb and its closest nouns.
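As a toy illustration of this hypothesis (our own sketch, not the baseline model's actual mechanism; the function name is ours), a flat "closest noun left of the verb" heuristic gets the agent right when the object carries the PP but wrong when the subject does:

```python
# Sketch of a flat, non-tree heuristic: take the noun nearest to the left of
# the verb as the agent. An object PP attaches to the far right and is
# harmless; a subject PP inserts "pp np" between subject and verb, so the
# closest noun to the left of the verb is now the prepositional noun.
def closest_noun_left_of_verb(tokens, verb, nouns):
    """Return the noun nearest to the left of `verb` (flat heuristic)."""
    v = tokens.index(verb)
    left_nouns = [t for t in tokens[:v] if t in nouns]
    return left_nouns[-1] if left_nouns else None

nouns = {"boy", "cake", "table"}

# Object modified by a PP: the PP is added after the object, to the right.
obj_pp = "the boy ate the cake on the table".split()
# Subject modified by a PP: "on the table" now sits between subject and verb.
subj_pp = "the boy on the table ate the cake".split()

print(closest_noun_left_of_verb(obj_pp, "ate", nouns))   # boy (correct agent)
print(closest_noun_left_of_verb(subj_pp, "ate", nouns))  # table (wrong agent)
```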

Side note: this is not specific to objects and subject PPs¶

This is also hypothesized not to be specific to subjects and objects: in a separate notebook reported in the paper we predict (and confirm) a new hardest split, "np v_dat_p2 np np" with the recipient np in the middle modified by a prepositional phrase, which has the same problem even though it is a totally different grammar form (transferring the pp from the theme noun to the recipient noun).

Consider, in the limit, the difference between "np v_dat_p2 np np"-style sentences where the recipient is modified versus the theme, given some long filler content:

"Emma gave a friend a cookie" -> "Emma gave a friend a cookie (filler filler filler)" (not going to miss that a cookie was given as more filler is added)

vs

"Emma gave a friend (filler filler filler) a cookie" (as filler gets larger and has more distractors, may miss that a cookie was given to the friend due to the distance between friend and cookie increasing and distractors including nouns being added in between).

Illustration¶

possible_issue_with_subj_pp_generalization_by_transformers_could_be_simple_nontree_pp_np_distractor_when_modifying_nps_with_related_nps_to_right(1).png

(stats at bottom of figure above are from https://colab.research.google.com/drive/1SOdNcVb4lfbeJTFfxs4HnFOf2GY_D-l4 )

Training N Wu et al 2023 baseline Encoder-Decoder Transformers (using their official training scripts) and evaluating on the obj_pp_to_subj_pp generalization split (using the official script) and collecting the errors made (to explain them)¶

Retrieve the Wu et al 2023 baseline Encoder-Decoder Transformer code

In [ ]:
!pip install transformers==v4.45.2 # there is a breaking change for Wu et al 2023 in upstream huggingface Transformers after this version (see https://github.com/frankaging/ReCOGS/issues/1 )
Collecting transformers==v4.45.2
  Downloading transformers-4.45.2-py3-none-any.whl.metadata (44 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 44.4/44.4 kB 3.8 MB/s eta 0:00:00
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (3.16.1)
Requirement already satisfied: huggingface-hub<1.0,>=0.23.2 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (0.26.5)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (1.26.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (24.2)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (6.0.2)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (2024.9.11)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (2.32.3)
Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (0.4.5)
Requirement already satisfied: tokenizers<0.21,>=0.20 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (0.20.3)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (4.66.6)
Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.23.2->transformers==v4.45.2) (2024.10.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.23.2->transformers==v4.45.2) (4.12.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==v4.45.2) (3.4.0)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==v4.45.2) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==v4.45.2) (2.2.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==v4.45.2) (2024.8.30)
Downloading transformers-4.45.2-py3-none-any.whl (9.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.9/9.9 MB 113.4 MB/s eta 0:00:00
Installing collected packages: transformers
  Attempting uninstall: transformers
    Found existing installation: transformers 4.46.3
    Uninstalling transformers-4.46.3:
      Successfully uninstalled transformers-4.46.3
Successfully installed transformers-4.45.2
In [ ]:
%cd /content/
!rm -rf ReCOGS
!git clone https://github.com/frankaging/ReCOGS.git
%cd ReCOGS
/content
Cloning into 'ReCOGS'...
remote: Enumerating objects: 436, done.
remote: Counting objects: 100% (124/124), done.
remote: Compressing objects: 100% (51/51), done.
remote: Total 436 (delta 96), reused 92 (delta 73), pack-reused 312 (from 1)
Receiving objects: 100% (436/436), 84.71 MiB | 16.06 MiB/s, done.
Resolving deltas: 100% (303/303), done.
Updating files: 100% (137/137), done.
/content/ReCOGS

Modify their script to log the errors for analysis. In run_cogs.py, add:

logging.basicConfig(filename="wu_et_al_2023_recogs_baseline_for_error_analysis.log")

and in the do_gen condition, at https://github.com/frankaging/ReCOGS/blob/1b6eca8ff4dca5fd2fb284a7d470998af5083beb/run_cogs.py#L384 , in the not-equal branch (it may also be added unconditionally, since saving the expected and actual columns then confirms that all examples get logged), add: logging.info(f"Mistake (category {cat}): '{decoded_preds[i]}', Expected: '{decoded_labels[i]}', input: {input_labels[i]}")

(The focus was on getting the script to run again after the upstream breaking change.)

Run 1 seed at a time and collect the errors.
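Running one seed per invocation can also be scripted; a minimal sketch (assuming the working directory is the cloned ReCOGS repo; `run_cogs_cmd` is our illustrative helper mirroring the command in the next cell, with each built command runnable via subprocess):

```python
# Sketch: build the run_cogs.py invocation one seed at a time, so each run's
# error log can be collected separately before the next seed starts.
import shlex

def run_cogs_cmd(seed):
    """Return the argv list for one training/eval run with the given seed."""
    return shlex.split(
        "python run_cogs.py --model_name ende_transformer --use_iiem --gpu 1 "
        "--train_batch_size 128 --eval_batch_size 128 --lr 0.0001 "
        "--data_path ./recogs_positional_index "
        "--output_dir ./results_recogs_positional_index_control "
        "--lfs cogs --do_train --do_test --do_gen --max_seq_len 512 "
        "--output_json --epochs 300"
    ) + ["--seeds", str(seed)]

for seed in [42, 43]:
    cmd = run_cogs_cmd(seed)
    print(cmd[-2:])  # e.g. ['--seeds', '42']
    # subprocess.run(cmd, check=True)  # uncomment to actually train
```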

In [ ]:
# baseline Wu et al 2023 model and baseline data
!python run_cogs.py --model_name ende_transformer --use_iiem --gpu 1 --train_batch_size 128 --eval_batch_size 128 --lr 0.0001 --data_path ./recogs_positional_index --output_dir ./results_recogs_positional_index_control --lfs cogs --do_train --do_test --do_gen --max_seq_len 512 --output_json --epochs 300 --seeds "42"
EncoderDecoderModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Epoch: 0:   0% 0/213 [00:00<?, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 0: 100% 213/213 [00:11<00:00, 18.58it/s, loss=5.89]
Epoch: 1: 100% 213/213 [00:10<00:00, 20.27it/s, loss=4.54]
Epoch: 2: 100% 213/213 [00:10<00:00, 20.28it/s, loss=3.54]
Epoch: 3: 100% 213/213 [00:10<00:00, 20.28it/s, loss=2.49]
Epoch: 4: 100% 213/213 [00:10<00:00, 20.25it/s, loss=1.91]
Epoch: 5: 100% 213/213 [00:10<00:00, 20.33it/s, loss=1.58]
Epoch: 6: 100% 213/213 [00:10<00:00, 20.33it/s, loss=1.3]
Epoch: 7: 100% 213/213 [00:10<00:00, 20.21it/s, loss=1.09]
Epoch: 8: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0.97]
Epoch: 9: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.87]
Epoch: 10: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0.77]
Epoch: 11: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.68]
Epoch: 12: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.61]
Epoch: 13: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.54]
Epoch: 14: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.47]
Epoch: 15: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.4]
Epoch: 16: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.34]
Epoch: 17: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.29]
Epoch: 18: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.25]
Epoch: 19: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.22]
Epoch: 20: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.19]
Epoch: 21: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.17]
Epoch: 22: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.16]
Epoch: 23: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.14]
Epoch: 24: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.12]
Epoch: 25: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.12]
Epoch: 26: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.11]
Epoch: 27: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.1]
Epoch: 28: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.1]
Epoch: 29: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.08]
Epoch: 30: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.08]
Epoch: 31: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.07]
Epoch: 32: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.07]
Epoch: 33: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.06]
Epoch: 34: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0.05]
Epoch: 35: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0.05]
Epoch: 36: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.05]
Epoch: 37: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0.04]
Epoch: 38: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.04]
Epoch: 39: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.04]
Epoch: 40: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.04]
Epoch: 41: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.03]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 42: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.03]
Epoch: 43:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 43: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.03]
Epoch: 44:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 44: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.04]
Epoch: 45:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 45: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.02]
Epoch: 46:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 46: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.02]
Epoch: 47:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 47: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.02]
Epoch: 48:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 48: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.02]
Epoch: 49:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 49: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.02]
Epoch: 50:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 50: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.02]
Epoch: 51:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 51: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.02]
Epoch: 52:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 52: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.01]
Epoch: 53:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 53: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.01]
Epoch: 54:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 54: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.02]
Epoch: 55:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 55: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.02]
Epoch: 56:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 56: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.02]
Epoch: 57:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 57: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.01]
Epoch: 58:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 58: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.01]
Epoch: 59:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 59: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.01]
Epoch: 60:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 60: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0.01]
Epoch: 61:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 61: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0.01]
Epoch: 62:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 62: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.01]
Epoch: 63:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 63: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0.01]
Epoch: 64:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 64: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.01]
Epoch: 65:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 65: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.01]
Epoch: 66:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 66: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.01]
Epoch: 67:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 67: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.01]
Epoch: 68:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 68: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.01]
Epoch: 69:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 69: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0.01]
Epoch: 70:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 70: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.01]
Epoch: 71:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 71: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.01]
Epoch: 72:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 72: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.01]
Epoch: 73:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 73: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0.01]
Epoch: 74:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 74: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0.01]
Epoch: 75:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 75: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.01]
Epoch: 76:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 76: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.01]
Epoch: 77:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 77: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.01]
Epoch: 78:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 78: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.01]
Epoch: 79:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 79: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 80:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 80: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0.01]
Epoch: 81:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 81: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 82:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 82: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 83:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 83: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 84:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 84: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 85:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 85: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 86:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 86: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 87:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 87: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.01]
Epoch: 88:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 88: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.01]
Epoch: 89:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 89: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0.01]
Epoch: 90:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 90: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 91: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.01]
[… the same FutureWarning and per-epoch progress bar repeat for epochs 92–151: each epoch completes 213/213 batches in ~10 s (~20.2–20.4 it/s) with loss=0 throughout, except loss=0.01 at epoch 118 …]
Epoch: 152:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 152: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 153:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 153: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 154:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 154: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 155:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 155: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 156:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 156: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 157:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 157: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 158:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 158: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 159:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 159: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 160:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 160: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 161:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 161: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 162:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 162: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 163:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 163: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 164:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 164: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 165:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 165: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 166:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 166: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 167:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 167: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 168:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 168: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 169:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 169: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 170:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 170: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 171:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 171: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 172:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 172: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 173:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 173: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 174:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 174: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 175:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 175: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 176:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 176: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 177:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 177: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 178:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 178: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 179:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 179: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 180:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 180: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 181:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 181: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 182:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 182: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 183:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 183: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 184:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 184: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 185:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 185: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 186:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 186: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 187:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 187: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 188:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 188: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 189:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 189: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 190:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 190: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 191:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 191: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 192:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 192: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 193:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 193: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 194:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 194: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 195:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 195: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 196:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 196: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 197:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 197: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 198:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 198: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 199:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 199: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 200:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 200: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 201:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 201: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0]
Epoch: 202: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0]
Epoch: 203: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 204: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 205: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0]
Epoch: 206: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 207: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 208: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 209: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 210: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 211: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 212: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 213: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0]
Epoch: 214: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0]
Epoch: 215: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 216: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 217: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 218: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 219: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 220: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 221: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 222: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 223: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 224: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 225: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 226: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 227: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 228: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 229: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 230: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 231: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 232: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 233: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 234: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 235: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 236: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 237: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 238: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0]
Epoch: 239: 100% 213/213 [00:10<00:00, 20.07it/s, loss=0]
Epoch: 240: 100% 213/213 [00:10<00:00, 20.05it/s, loss=0]
Epoch: 241: 100% 213/213 [00:10<00:00, 20.03it/s, loss=0]
Epoch: 242: 100% 213/213 [00:10<00:00, 20.06it/s, loss=0]
Epoch: 243: 100% 213/213 [00:10<00:00, 20.04it/s, loss=0]
Epoch: 244: 100% 213/213 [00:10<00:00, 20.10it/s, loss=0]
Epoch: 245: 100% 213/213 [00:10<00:00, 20.04it/s, loss=0]
Epoch: 246: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 247: 100% 213/213 [00:10<00:00, 19.99it/s, loss=0]
Epoch: 248: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 249: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 250: 100% 213/213 [00:10<00:00, 20.11it/s, loss=0]
Epoch: 251: 100% 213/213 [00:10<00:00, 20.11it/s, loss=0]
Epoch: 252: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 253: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0]
Epoch: 254: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 255: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0]
Epoch: 256: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 257: 100% 213/213 [00:10<00:00, 20.10it/s, loss=0]
Epoch: 258: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0]
Epoch: 259: 100% 213/213 [00:10<00:00, 20.10it/s, loss=0]
Epoch: 260: 100% 213/213 [00:10<00:00, 19.98it/s, loss=0]
Epoch: 261: 100% 213/213 [00:10<00:00, 19.75it/s, loss=0]
Epoch: 262: 100% 213/213 [00:10<00:00, 19.86it/s, loss=0]
Epoch: 263:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 263: 100% 213/213 [00:10<00:00, 19.86it/s, loss=0]
Epoch: 264:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 264: 100% 213/213 [00:10<00:00, 19.90it/s, loss=0]
Epoch: 265:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 265: 100% 213/213 [00:10<00:00, 19.98it/s, loss=0]
Epoch: 266:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 266: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 267:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 267: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 268:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 268: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0]
Epoch: 269:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 269: 100% 213/213 [00:10<00:00, 20.10it/s, loss=0]
Epoch: 270:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 270: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0]
Epoch: 271:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 271: 100% 213/213 [00:10<00:00, 19.84it/s, loss=0]
Epoch: 272:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 272: 100% 213/213 [00:10<00:00, 19.97it/s, loss=0]
Epoch: 273:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 273: 100% 213/213 [00:10<00:00, 20.01it/s, loss=0]
Epoch: 274:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 274: 100% 213/213 [00:10<00:00, 19.94it/s, loss=0]
Epoch: 275:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 275: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 276:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 276: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 277:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 277: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 278:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 278: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 279:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 279: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 280:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 280: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 281:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 281: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 282:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 282: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 283:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 283: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 284:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 284: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 285:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 285: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 286:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 286: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 287:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 287: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 288:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 288: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 289:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 289: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 290:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 290: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 291:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 291: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0]
Epoch: 292:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 292: 100% 213/213 [00:10<00:00, 20.05it/s, loss=0]
Epoch: 293:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 293: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 294:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 294: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 295:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 295: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 296:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 296: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 297:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 297: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 298:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 298: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0]
Epoch: 299:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 299: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 100% 300/300 [52:53<00:00, 10.58s/it]
Iteration: 100% 24/24 [00:10<00:00,  2.35it/s, acc=1]
Iteration: 100% 165/165 [20:54<00:00,  7.61s/it, acc=0.874]
obj_pp_to_subj_pp: 14.8
cp_recursion: 52.2
pp_recursion: 43.9
subj_to_obj_proper: 95.5
prim_to_obj_proper: 95.5
prim_to_subj_proper: 99.9
LEX: 95.52000000000001
OVERALL: 87.36190476190477
In [ ]:
# extract the obj_pp_to_subj_pp mistakes from the training log into a
# tab-separated (actual, expected, input) TSV, then rename both files
# to record the seed (42) used for this run
!echo "actual	expected	input" > wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!cat wu_et_al_2023_recogs_baseline_for_error_analysis.log | grep obj_pp_to_subj_pp | sed -E 's/INFO:root:Mistake \(category obj_pp_to_subj_pp\)://g' | sed -E 's/, Expected: /	/g' | sed -E 's/, input: /	/g' | sed -E "s/'//g" >> wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!mv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_42.tsv
actual	expected	input
 * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND scream ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )	The baby on a tray in the house screamed .
 * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND theme ( 5 , 8 )	* spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	The spokesman in the house served Emma the rose .
 * donut ( 1 ) ; * mound ( 4 ) ; child ( 9 ) ; * computer ( 12 ) ; nmod . on ( 1 , 4 ) AND slide ( 6 ) AND theme ( 6 , 4 ) AND agent ( 6 , 9 ) AND nmod . on ( 9 , 12 )	* donut ( 1 ) ; * mound ( 4 ) ; child ( 9 ) ; * computer ( 12 ) ; nmod . on ( 1 , 4 ) AND slide ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 9 ) AND nmod . on ( 9 , 12 )	The donut on the mound was slid by a child on the computer .
 * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )	The dog in a bakery in the bag sneezed .
 girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )	girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )	A girl on the stool on the table drew a frog .
 donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 )	donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 )	A donut on a table grew .
 * cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 4 )	* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 )	The cake in the house was painted .
 * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND theme ( 5 , 1 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	* sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	The sailor in a house lended a biscuit on a table to a goose .
 visitor ( 1 ) ; * pile ( 4 ) ; resident ( 7 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 7 )	visitor ( 1 ) ; * pile ( 4 ) ; resident ( 7 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )	A visitor in the pile rolled a resident .
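The error rows previewed above already show the pattern the hypothesis predicts: the gold `agent ( verb , subject )` relation is emitted with the role `theme` instead (the subject, now separated from its verb by the inserted prepositional noun phrase, is misread). As a minimal sketch of how one might check a row mechanically (the helper function name is our own, not part of the notebook's pipeline; the example row is copied from the TSV preview above):

```python
# Sketch: detect whether an error row swaps a gold "agent ( v , n )"
# relation for "theme ( v , n )" with the same arguments, as the
# flat-pattern-matching hypothesis predicts for obj_pp_to_subj_pp.
def agent_swapped_to_theme(actual: str, expected: str) -> bool:
    act = {r.strip() for r in actual.split("AND")}
    exp = {r.strip() for r in expected.split("AND")}
    for rel in exp - act:  # gold relations the model failed to emit
        if rel.startswith("agent"):
            args = rel[rel.index("("):]   # e.g. "( 8 , 1 )"
            if "theme " + args in act:    # same arguments, wrong role
                return True
    return False

# First error row from the TSV preview above ("The baby on a tray in
# the house screamed ."):
actual = ("* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) "
          "AND scream ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )")
expected = ("* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) "
            "AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )")
print(agent_swapped_to_theme(actual, expected))  # True: agent (8, 1) became theme (8, 1)
```

The same function could be mapped over every row of the seed-specific TSVs (e.g. with `csv.DictReader` on the tab-delimited file) to tally how many of the split's errors match the predicted role swap.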
In [ ]:
!mv wu_et_al_2023_recogs_baseline_for_error_analysis.log wu_et_al_2023_recogs_baseline_for_error_analysis_seed_42.log
In [ ]:
# baseline Wu et al 2023 model and baseline data (second of the 10 runs, seed 66)
!python run_cogs.py --model_name ende_transformer --use_iiem --gpu 1 --train_batch_size 128 --eval_batch_size 128 --lr 0.0001 --data_path ./recogs_positional_index --output_dir ./results_recogs_positional_index_control --lfs cogs --do_train --do_test --do_gen --max_seq_len 512 --output_json --epochs 300 --seeds "66"
EncoderDecoderModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Epoch: 0:   0% 0/213 [00:00<?, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 0: 100% 213/213 [00:11<00:00, 18.55it/s, loss=5.99]
Epoch: 1:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 1: 100% 213/213 [00:10<00:00, 20.15it/s, loss=4.61]
Epoch: 2:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 2: 100% 213/213 [00:10<00:00, 20.08it/s, loss=3.61]
Epoch: 3:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 3: 100% 213/213 [00:10<00:00, 19.89it/s, loss=2.53]
Epoch: 4:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 4: 100% 213/213 [00:10<00:00, 19.80it/s, loss=1.96]
Epoch: 5: 100% 213/213 [00:10<00:00, 20.01it/s, loss=1.6]
Epoch: 6: 100% 213/213 [00:10<00:00, 20.29it/s, loss=1.32]
Epoch: 7: 100% 213/213 [00:10<00:00, 20.33it/s, loss=1.13]
Epoch: 8: 100% 213/213 [00:10<00:00, 20.28it/s, loss=1]
Epoch: 9: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.9]
Epoch: 10: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.8]
Epoch: 11: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.71]
Epoch: 12: 100% 213/213 [00:10<00:00, 20.11it/s, loss=0.63]
Epoch: 13: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0.54]
Epoch: 14: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0.47]
Epoch: 15: 100% 213/213 [00:10<00:00, 20.05it/s, loss=0.41]
Epoch: 16: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0.36]
Epoch: 17: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0.31]
Epoch: 18: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0.27]
Epoch: 19: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0.23]
Epoch: 20: 100% 213/213 [00:10<00:00, 20.07it/s, loss=0.21]
Epoch: 21: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0.17]
Epoch: 22: 100% 213/213 [00:10<00:00, 20.06it/s, loss=0.15]
Epoch: 23: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0.13]
Epoch: 24: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0.12]
Epoch: 25: 100% 213/213 [00:10<00:00, 20.11it/s, loss=0.11]
Epoch: 26: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0.11]
Epoch: 27: 100% 213/213 [00:10<00:00, 20.10it/s, loss=0.09]
Epoch: 28: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0.09]
Epoch: 29: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0.08]
Epoch: 30: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0.07]
Epoch: 31: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.06]
Epoch: 32: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.05]
Epoch: 33: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0.05]
Epoch: 34: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.05]
Epoch: 35: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0.04]
Epoch: 36: 100% 213/213 [00:10<00:00, 20.03it/s, loss=0.04]
Epoch: 37: 100% 213/213 [00:10<00:00, 19.95it/s, loss=0.04]
Epoch: 38: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0.04]
Epoch: 39: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.03]
Epoch: 40: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0.03]
Epoch: 41: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0.03]
Epoch: 42: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.03]
Epoch: 43: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.03]
Epoch: 44: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0.02]
Epoch: 45: 100% 213/213 [00:10<00:00, 20.01it/s, loss=0.02]
Epoch: 46: 100% 213/213 [00:10<00:00, 19.96it/s, loss=0.02]
Epoch: 47: 100% 213/213 [00:10<00:00, 20.01it/s, loss=0.02]
Epoch: 48: 100% 213/213 [00:10<00:00, 19.99it/s, loss=0.02]
Epoch: 49: 100% 213/213 [00:10<00:00, 20.08it/s, loss=0.02]
Epoch: 50: 100% 213/213 [00:10<00:00, 20.04it/s, loss=0.02]
Epoch: 51: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0.03]
Epoch: 52: 100% 213/213 [00:10<00:00, 20.01it/s, loss=0.02]
Epoch: 53: 100% 213/213 [00:10<00:00, 20.10it/s, loss=0.01]
Epoch: 54: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0.02]
Epoch: 55: 100% 213/213 [00:10<00:00, 20.06it/s, loss=0.01]
Epoch: 56: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0.01]
Epoch: 57: 100% 213/213 [00:10<00:00, 19.98it/s, loss=0.01]
Epoch: 58: 100% 213/213 [00:10<00:00, 19.63it/s, loss=0.01]
Epoch: 59: 100% 213/213 [00:10<00:00, 19.82it/s, loss=0.01]
Epoch: 60: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0.01]
Epoch: 61:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 61: 100% 213/213 [00:10<00:00, 20.11it/s, loss=0.01]
Epoch: 62:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 62: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 63:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 63: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.01]
Epoch: 64:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 64: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.01]
Epoch: 65:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 65: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0.01]
Epoch: 66:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 66: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0.01]
Epoch: 67:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 67: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.01]
Epoch: 68:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 68: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0.01]
Epoch: 69:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 69: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 70:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 70: 100% 213/213 [00:10<00:00, 20.01it/s, loss=0.01]
Epoch: 71:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 71: 100% 213/213 [00:10<00:00, 19.78it/s, loss=0.01]
Epoch: 72:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 72: 100% 213/213 [00:10<00:00, 19.75it/s, loss=0.01]
Epoch: 73:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 73: 100% 213/213 [00:10<00:00, 20.02it/s, loss=0.01]
Epoch: 74:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 74: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.01]
Epoch: 75:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 75: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.01]
Epoch: 76:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 76: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.01]
Epoch: 77:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 77: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0.01]
Epoch: 78:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 78: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 79:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 79: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.01]
Epoch: 80:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 80: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 81:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 81: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.01]
Epoch: 82:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 82: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0.01]
Epoch: 83:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 83: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.01]
Epoch: 84:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 84: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 85:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 85: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 86:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 86: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0.01]
Epoch: 87:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 87: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 88:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 88: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.01]
Epoch: 89:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 89: 100% 213/213 [00:10<00:00, 19.86it/s, loss=0]
Epoch: 90:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 90: 100% 213/213 [00:10<00:00, 19.75it/s, loss=0]
Epoch: 91:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 91: 100% 213/213 [00:10<00:00, 19.89it/s, loss=0]
Epoch: 92:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 92: 100% 213/213 [00:10<00:00, 19.81it/s, loss=0]
Epoch: 93:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 93: 100% 213/213 [00:10<00:00, 19.87it/s, loss=0]
Epoch: 94:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 94: 100% 213/213 [00:10<00:00, 19.98it/s, loss=0]
Epoch: 95:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 95: 100% 213/213 [00:10<00:00, 20.08it/s, loss=0.01]
Epoch: 96:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 96: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 97:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 97: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0]
Epoch: 98:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 98: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 99:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 99: 100% 213/213 [00:10<00:00, 20.11it/s, loss=0.01]
Epoch: 100:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 100: 100% 213/213 [00:10<00:00, 19.93it/s, loss=0]
Epoch: 101:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 101: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 102:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 102: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 103:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 103: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0]
Epoch: 104:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 104: 100% 213/213 [00:10<00:00, 19.93it/s, loss=0]
Epoch: 105:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 105: 100% 213/213 [00:10<00:00, 19.95it/s, loss=0]
Epoch: 106:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 106: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 107:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 107: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 108:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 108: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0]
Epoch: 109:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 109: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 110:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 110: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 111:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 111: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 112:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 112: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 113: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 114: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 115: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 116: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 117: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 118: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 119: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 120: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 121: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 122: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 123: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 124: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 125: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 126: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 127: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 128: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 129: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0]
Epoch: 130: 100% 213/213 [00:10<00:00, 19.96it/s, loss=0]
Epoch: 131: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 132: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 133: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 134: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 135: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 136: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 137: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 138: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 139: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 140: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 141: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 142: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 143: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 144: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 145: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 146: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 147: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 148: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 149: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 150: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 151: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 152: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 153: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 154: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 155: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 156: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 157: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 158: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 159: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 160: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 161: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 162: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 163: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 164: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 165: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 166: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 167: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 168: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 169: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 170: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 171: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 172: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 173: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 174: 100% 213/213 [00:10<00:00, 20.11it/s, loss=0]
Epoch: 175:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 175: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 176:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 176: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 177:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 177: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 178:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 178: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 179:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 179: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 180:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 180: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 181:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 181: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 182:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 182: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 183:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 183: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 184:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 184: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 185:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 185: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 186:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 186: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 187:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 187: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 188:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 188: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 189:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 189: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 190:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 190: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 191:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 191: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 192:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 192: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 193:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 193: 100% 213/213 [00:10<00:00, 19.99it/s, loss=0]
Epoch: 194:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 194: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 195:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 195: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 196:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 196: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 197:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 197: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 198:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 198: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 199:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 199: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 200:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 200: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 201:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 201: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 202:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 202: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 203:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 203: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 204:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 204: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 205:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 205: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 206:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 206: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0]
Epoch: 207:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 207: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 208:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 208: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 209:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 209: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 210:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 210: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0]
Epoch: 211:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 211: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 212:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 212: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 213:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 213: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 214:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 214: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 215:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 215: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 216:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 216: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 217:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 217: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 218:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 218: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 219:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 219: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 220:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 220: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 221:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 221: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 222:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 222: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 223:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 223: 100% 213/213 [00:10<00:00, 20.05it/s, loss=0]
Epoch: 224: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 225: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 226: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 227: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 228: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 229: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 230: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 231: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 232: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 233: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 234: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 235: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 236: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 237: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 238: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 239: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 240: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0]
Epoch: 241: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 242: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 243: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 244: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 245: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 246: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 247: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 248: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 249: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 250: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 251: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 252: 100% 213/213 [00:10<00:00, 19.97it/s, loss=0]
Epoch: 253: 100% 213/213 [00:10<00:00, 20.06it/s, loss=0]
Epoch: 254: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 255: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 256: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 257: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 258: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 259: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 260: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 261: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 262: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 263: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 264: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 265: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 266: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 267: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 268: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 269: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 270: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 271: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 272: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 273: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 274: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 275: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 276: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 277: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 278: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 279: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 280: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 281: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 282: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 283: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 284: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 285: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 286:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 286: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 287:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 287: 100% 213/213 [00:10<00:00, 20.11it/s, loss=0]
Epoch: 288:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 288: 100% 213/213 [00:10<00:00, 20.11it/s, loss=0]
Epoch: 289:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 289: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 290:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 290: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 291:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 291: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 292:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 292: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 293:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 293: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 294:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 294: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 295:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 295: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 296:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 296: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 297:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 297: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 298:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 298: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 299:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 299: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 100% 300/300 [53:05<00:00, 10.62s/it]
Iteration: 100% 24/24 [00:10<00:00,  2.33it/s, acc=1]
Iteration: 100% 165/165 [20:26<00:00,  7.44s/it, acc=0.898]
obj_pp_to_subj_pp: 19.7
cp_recursion: 53.6
pp_recursion: 42.4
subj_to_obj_proper: 90.5
prim_to_obj_proper: 86.1
prim_to_subj_proper: 100.0
LEX: 99.63333333333333
OVERALL: 89.84761904761905
In [ ]:
# extract the obj_pp_to_subj_pp errors from the log into a tab-separated file (actual, expected, input)
!echo "actual	expected	input" > wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!cat wu_et_al_2023_recogs_baseline_for_error_analysis.log | grep obj_pp_to_subj_pp | sed -E 's/INFO:root:Mistake \(category obj_pp_to_subj_pp\)://g' | sed -E 's/, Expected: /	/g' | sed -E 's/, input: /	/g' | sed -E "s/'//g" >> wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!mv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_66.tsv
actual	expected	input
 * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )	The baby on a tray in the house screamed .
 * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 8 )	* spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	The spokesman in the house served Emma the rose .
 donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	A donkey in the room sold Ella a donut .
 cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND theme ( 6 , 1 ) AND recipient ( 6 , 4 ) AND agent ( 6 , 8 ) AND nmod . beside ( 8 , 14 )	cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 1 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	A cat in the house was offered the cake by the boy beside a table .
 * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )	The dog in a bakery in the bag sneezed .
 girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 10 )	girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )	A girl on the stool on the table drew a frog .
 donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 )	donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 )	A donut on a table grew .
 * cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 4 )	* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 )	The cake in the house was painted .
 * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	* sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	The sailor in a house lended a biscuit on a table to a goose .
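The errors above follow the pattern the hypothesis predicts: the expected agent of the verb is the subject (noun index 1), but the model reassigns agent to the prepositional noun now sitting closest to the left of the verb. Below is a minimal sketch (not part of the original notebook's analysis code) that makes this check concrete; the two (actual, expected) rows are copied verbatim from the TSV output above, and the `agent ( v , n )` regex simply reflects the ReCOGS logical-form spacing visible in those rows.

```python
import re

# Two (actual, expected) logical-form pairs copied verbatim from the error TSV above.
rows = [
    ("* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )",
     "* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )"),
    ("* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )",
     "* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )"),
]

def agents(lf):
    """Map verb index -> noun index for each agent predicate in a ReCOGS LF."""
    return {int(v): int(n) for v, n in re.findall(r"agent \( (\d+) , (\d+) \)", lf)}

def agent_moved_to_closer_left_noun(actual, expected):
    """True if some verb's expected agent (the subject) was reassigned to a
    noun with a larger index that is still left of the verb, i.e. the
    prepositional noun inserted between subject and verb."""
    exp, act = agents(expected), agents(actual)
    return any(v in act and n < act[v] < v for v, n in exp.items())

for actual, expected in rows:
    print(agent_moved_to_closer_left_noun(actual, expected))
```

Run over the full TSV (split on tabs, skipping the header), this classifies how many obj_pp_to_subj_pp errors are consistent with the flat pattern-matcher mistaking the nearer prepositional noun for the agent.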
In [ ]:
!mv wu_et_al_2023_recogs_baseline_for_error_analysis.log wu_et_al_2023_recogs_baseline_for_error_analysis_seed_66.log
In [ ]:
# re-run the baseline Wu et al 2023 model on the baseline ReCOGS_pos data with a new seed (77)
!python run_cogs.py --model_name ende_transformer --use_iiem --gpu 1 --train_batch_size 128 --eval_batch_size 128 --lr 0.0001 --data_path ./recogs_positional_index --output_dir ./results_recogs_positional_index_control --lfs cogs --do_train --do_test --do_gen --max_seq_len 512 --output_json --epochs 300 --seeds "77"
EncoderDecoderModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Epoch: 0:   0% 0/213 [00:00<?, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 0: 100% 213/213 [00:11<00:00, 18.49it/s, loss=6.02]
Epoch: 1: 100% 213/213 [00:10<00:00, 20.31it/s, loss=4.68]
Epoch: 2: 100% 213/213 [00:10<00:00, 20.08it/s, loss=3.62]
Epoch: 3: 100% 213/213 [00:10<00:00, 20.00it/s, loss=2.5]
Epoch: 4: 100% 213/213 [00:10<00:00, 20.27it/s, loss=1.96]
Epoch: 5: 100% 213/213 [00:10<00:00, 20.29it/s, loss=1.62]
Epoch: 6: 100% 213/213 [00:10<00:00, 20.26it/s, loss=1.36]
Epoch: 7: 100% 213/213 [00:10<00:00, 20.27it/s, loss=1.18]
Epoch: 8: 100% 213/213 [00:10<00:00, 20.14it/s, loss=1.06]
Epoch: 9: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0.95]
Epoch: 10: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0.86]
Epoch: 11: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.77]
Epoch: 12: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0.69]
Epoch: 13: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0.62]
Epoch: 14: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0.55]
Epoch: 15: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.49]
Epoch: 16: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.43]
Epoch: 17: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.37]
Epoch: 18: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0.32]
Epoch: 19: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.29]
Epoch: 20: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0.25]
Epoch: 21: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.22]
Epoch: 22: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.2]
Epoch: 23: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.18]
Epoch: 24: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.16]
Epoch: 25: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.14]
Epoch: 26: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.13]
Epoch: 27: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.11]
Epoch: 28: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.11]
Epoch: 29: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.1]
Epoch: 30: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.1]
Epoch: 31:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 31: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.08]
Epoch: 32:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 32: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0.07]
Epoch: 33:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 33: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0.07]
Epoch: 34:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 34: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0.07]
Epoch: 35:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 35: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0.06]
Epoch: 36:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 36: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0.06]
Epoch: 37:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 37: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.05]
Epoch: 38:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 38: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.05]
Epoch: 39:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 39: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0.04]
Epoch: 40:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 40: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0.05]
Epoch: 41:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 41: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0.04]
Epoch: 42:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 42: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0.03]
Epoch: 43:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 43: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0.04]
Epoch: 44:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 44: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.03]
Epoch: 45:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 45: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.03]
Epoch: 46:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 46: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.03]
Epoch: 47:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 47: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.03]
Epoch: 48:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 48: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.02]
Epoch: 49:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 49: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.02]
Epoch: 50:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 50: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0.03]
Epoch: 51:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 51: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0.02]
Epoch: 52:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 52: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0.03]
Epoch: 53:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 53: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.02]
Epoch: 54:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 54: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.02]
Epoch: 55:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 55: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.02]
Epoch: 56:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 56: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.02]
Epoch: 57:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 57: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.02]
Epoch: 58:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 58: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.02]
Epoch: 59:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 59: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.02]
Epoch: 60:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 60: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.02]
Epoch: 61:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 61: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0.01]
Epoch: 62:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 62: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.01]
Epoch: 63:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 63: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.02]
Epoch: 64:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 64: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.02]
Epoch: 65:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 65: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0.01]
Epoch: 66:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 66: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0.01]
Epoch: 67:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 67: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0.01]
Epoch: 68:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 68: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0.01]
Epoch: 69:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 69: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0.01]
Epoch: 70:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 70: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0.01]
Epoch: 71:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 71: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0.01]
Epoch: 72:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 72: 100% 213/213 [00:10<00:00, 20.08it/s, loss=0.01]
Epoch: 73:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 73: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0.01]
Epoch: 74:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 74: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.02]
Epoch: 75:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 75: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0.01]
Epoch: 76:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 76: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 77:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 77: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.01]
Epoch: 78:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 78: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0.01]
Epoch: 79:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
[The FutureWarning above is emitted identically at the start of every epoch; repeated copies are omitted below.]
Epoch: 79: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0.01]
Epoch: 80: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.01]
Epoch: 81: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 82: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.01]
Epoch: 83: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.01]
Epoch: 84: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.01]
Epoch: 85: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.01]
Epoch: 86: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.01]
Epoch: 87: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 88: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 89: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0.01]
Epoch: 90: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0.01]
Epoch: 91: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 92: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.01]
Epoch: 93: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 94: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0.01]
Epoch: 95: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 96: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 97: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 98: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0]
Epoch: 99: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 100: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 101: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 102: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 103: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 104: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 105: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 106: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 107: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 108: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 109: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0]
Epoch: 110: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0]
Epoch: 111: 100% 213/213 [00:10<00:00, 20.08it/s, loss=0]
Epoch: 112: 100% 213/213 [00:10<00:00, 20.01it/s, loss=0]
Epoch: 113: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 114: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0]
Epoch: 115: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0]
Epoch: 116: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 117: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 118: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 119: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.01]
Epoch: 120: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 121: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 122: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 123: 100% 213/213 [00:10<00:00, 20.11it/s, loss=0]
Epoch: 124: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 125: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 126: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 127: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 128: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0.01]
Epoch: 129: 100% 213/213 [00:10<00:00, 20.04it/s, loss=0]
Epoch: 130: 100% 213/213 [00:10<00:00, 20.00it/s, loss=0]
Epoch: 131: 100% 213/213 [00:10<00:00, 20.08it/s, loss=0]
Epoch: 132: 100% 213/213 [00:10<00:00, 20.07it/s, loss=0]
Epoch: 133: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0]
Epoch: 134: 100% 213/213 [00:10<00:00, 20.10it/s, loss=0]
Epoch: 135: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 136: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 137: 100% 213/213 [00:10<00:00, 20.08it/s, loss=0]
Epoch: 138: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 139: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0.01]
Epoch: 140: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 141:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 141: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 142:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 142: 100% 213/213 [00:10<00:00, 19.85it/s, loss=0]
Epoch: 143:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 143: 100% 213/213 [00:10<00:00, 19.82it/s, loss=0]
Epoch: 144:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 144: 100% 213/213 [00:10<00:00, 19.90it/s, loss=0]
Epoch: 145:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 145: 100% 213/213 [00:10<00:00, 19.87it/s, loss=0]
Epoch: 146:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 146: 100% 213/213 [00:10<00:00, 19.93it/s, loss=0]
Epoch: 147:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 147: 100% 213/213 [00:10<00:00, 20.05it/s, loss=0]
Epoch: 148:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 148: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0]
Epoch: 149:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 149: 100% 213/213 [00:10<00:00, 20.10it/s, loss=0]
Epoch: 150:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 150: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 151:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 151: 100% 213/213 [00:10<00:00, 20.17it/s, loss=0]
Epoch: 152:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 152: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 153:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 153: 100% 213/213 [00:10<00:00, 19.93it/s, loss=0]
Epoch: 154:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 154: 100% 213/213 [00:10<00:00, 20.11it/s, loss=0]
Epoch: 155:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 155: 100% 213/213 [00:10<00:00, 20.05it/s, loss=0]
Epoch: 156:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 156: 100% 213/213 [00:10<00:00, 19.97it/s, loss=0]
Epoch: 157:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 157: 100% 213/213 [00:10<00:00, 20.06it/s, loss=0]
Epoch: 158:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 158: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 159:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 159: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0.01]
Epoch: 160:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 160: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 161:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 161: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 162:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 162: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 163:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 163: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 164:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 164: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 165:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 165: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 166:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 166: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 167:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 167: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 168:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 168: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 169:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 169: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 170:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 170: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 171:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 171: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 172:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 172: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 173:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 173: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 174:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 174: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 175:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 175: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 176:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 176: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 177:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 177: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 178:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 178: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 179:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 179: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 180:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 180: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 181:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 181: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 182:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 182: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 183:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 183: 100% 213/213 [00:10<00:00, 19.99it/s, loss=0]
Epoch: 184:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 184: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0]
Epoch: 185:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 185: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 186:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 186: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 187:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 187: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 188:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 188: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 189:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 189: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
[... same FutureWarning repeated at the start of every epoch; epochs 190–251 each completed 213/213 batches in ~10 s (~20.2–20.4 it/s) with loss=0; epoch 252 in progress at end of this excerpt ...]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 252: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 253:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 253: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 254:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 254: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 255:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 255: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 256:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 256: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 257:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 257: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 258:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 258: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 259:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 259: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 260:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 260: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 261:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 261: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 262:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 262: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 263:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 263: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 264:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 264: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 265:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 265: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 266:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 266: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 267:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 267: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 268:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 268: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 269:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 269: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 270:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 270: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 271:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 271: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 272:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 272: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 273:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 273: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 274:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 274: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 275:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 275: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 276:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 276: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 277:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 277: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 278:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 278: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 279:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 279: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 280:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 280: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 281:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 281: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 282:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 282: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 283:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 283: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 284:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 284: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 285:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 285: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 286:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 286: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 287:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 287: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 288:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 288: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 289:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 289: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 290:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 290: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 291:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 291: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 292:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 292: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 293:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 293: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 294:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 294: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 295:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 295: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 296:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 296: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 297:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 297: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 298:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 298: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 299:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 299: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 100% 300/300 [52:57<00:00, 10.59s/it]
Iteration: 100% 24/24 [00:10<00:00,  2.35it/s, acc=1]
Iteration: 100% 165/165 [17:39<00:00,  6.42s/it, acc=0.901]
obj_pp_to_subj_pp: 31.1
cp_recursion: 53.7
pp_recursion: 43.1
subj_to_obj_proper: 88.0
prim_to_obj_proper: 92.4
prim_to_subj_proper: 99.9
LEX: 98.99333333333334
OVERALL: 90.14761904761905
In [ ]:
# extract the obj_pp_to_subj_pp errors from the run log into a TSV (actual, expected, input columns), then rename the files to record the seed
!echo "actual	expected	input" > wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!cat wu_et_al_2023_recogs_baseline_for_error_analysis.log | grep obj_pp_to_subj_pp | sed -E 's/INFO:root:Mistake \(category obj_pp_to_subj_pp\)://g' | sed -E 's/, Expected: /	/g' | sed -E 's/, input: /	/g' | sed -E "s/'//g" >> wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!mv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_77.tsv
actual	expected	input
 * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 7 )	* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )	The baby on a tray in the house screamed .
 * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 6 )	* spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	The spokesman in the house served Emma the rose .
 donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 8 )	donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	A donkey in the room sold Ella a donut .
 cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 1 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 8 , 14 )	cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 1 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	A cat in the house was offered the cake by the boy beside a table .
 * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )	The dog in a bakery in the bag sneezed .
 girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 10 )	girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )	A girl on the stool on the table drew a frog .
 donut ( 1 ) ; table ( 4 ) ; grow ( 5 ) AND theme ( 5 , 1 ) AND agent ( 1 , 4 )	donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 )	A donut on a table grew .
 * cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 4 )	* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 )	The cake in the house was painted .
 * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , recipient ( 5 ) AND nmod . on ( 7 , 10 )	* sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	The sailor in a house lended a biscuit on a table to a goose .
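The TSV rows above can be checked against the hypothesis programmatically: extract the verb-argument role triples from the actual and expected logical forms and diff them. A minimal sketch (the regex and the sample row are taken from the first mistake above; this is illustrative, not the notebook's analysis code):

```python
# Sketch: diff role assignments between a mistaken ReCOGS output and the
# expected logical form, to see which argument index the model swapped in.
import re

def parse_roles(lf):
    """Extract (relation, verb_idx, arg_idx) triples like 'agent ( 8 , 7 )'.
    Only verb-argument relations are matched; 'nmod . on ( 1 , 4 )' is ignored."""
    return set(re.findall(r"(agent|theme|recipient)\s*\(\s*(\d+)\s*,\s*(\d+)\s*\)", lf))

# First error row above: "The baby on a tray in the house screamed ."
actual = ("* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) "
          "AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 7 )")
expected = ("* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) "
            "AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )")

wrong = parse_roles(actual) - parse_roles(expected)    # roles the model invented
missing = parse_roles(expected) - parse_roles(actual)  # roles the model lost
print(wrong, missing)
```

Here the model kept the verb but pointed `agent` at index 7 (the prepositional noun nearest the verb) instead of index 1 (the true subject), consistent with the flat pattern-matching hypothesis.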
In [ ]:
!mv wu_et_al_2023_recogs_baseline_for_error_analysis.log wu_et_al_2023_recogs_baseline_for_error_analysis_seed_77.log

Pause here, save results, and resume in a few hours. Colab requires active engagement to keep the session alive; the A100 cannot be left running in the background.

On resume, repeat the setup steps.

In [ ]:
!pip install transformers==v4.45.2 # there is a breaking change for Wu et al 2023 in upstream huggingface Transformers after this version (see https://github.com/frankaging/ReCOGS/issues/1 )
Collecting transformers==v4.45.2
  Downloading transformers-4.45.2-py3-none-any.whl.metadata (44 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 44.4/44.4 kB 2.0 MB/s eta 0:00:00
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (3.16.1)
Requirement already satisfied: huggingface-hub<1.0,>=0.23.2 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (0.26.5)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (1.26.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (24.2)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (6.0.2)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (2024.9.11)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (2.32.3)
Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (0.4.5)
Requirement already satisfied: tokenizers<0.21,>=0.20 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (0.20.3)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers==v4.45.2) (4.66.6)
Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.23.2->transformers==v4.45.2) (2024.10.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.23.2->transformers==v4.45.2) (4.12.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==v4.45.2) (3.4.0)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==v4.45.2) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==v4.45.2) (2.2.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==v4.45.2) (2024.8.30)
Downloading transformers-4.45.2-py3-none-any.whl (9.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.9/9.9 MB 71.4 MB/s eta 0:00:00
Installing collected packages: transformers
  Attempting uninstall: transformers
    Found existing installation: transformers 4.46.3
    Uninstalling transformers-4.46.3:
      Successfully uninstalled transformers-4.46.3
Successfully installed transformers-4.45.2
In [ ]:
%cd /content/
!rm -rf ReCOGS
!git clone https://github.com/frankaging/ReCOGS.git
%cd ReCOGS
/content
Cloning into 'ReCOGS'...
remote: Enumerating objects: 436, done.
remote: Counting objects: 100% (124/124), done.
remote: Compressing objects: 100% (51/51), done.
remote: Total 436 (delta 96), reused 92 (delta 73), pack-reused 312 (from 1)
Receiving objects: 100% (436/436), 84.71 MiB | 36.25 MiB/s, done.
Resolving deltas: 100% (303/303), done.
Updating files: 100% (137/137), done.
/content/ReCOGS

Modify their script to log the errors for analysis. In run_cogs.py, add:

logging.basicConfig(filename="wu_et_al_2023_recogs_baseline_for_error_analysis.log")

and in the do_gen condition, at https://github.com/frankaging/ReCOGS/blob/1b6eca8ff4dca5fd2fb284a7d470998af5083beb/run_cogs.py#L384 , in the not-equal branch, add:

logging.info(f"Mistake (category {cat}): '{decoded_preds[i]}', Expected: '{decoded_labels[i]}', input: {input_labels[i]}")

(This could also be logged unconditionally for every example: since both the expected and actual columns are saved, that would let one confirm all examples get logged.)

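The logging pattern above can be demonstrated standalone. A minimal sketch (the filename, `cat`, and the prediction strings here are stand-ins, not run_cogs.py's actual variables; note the root logger must be at INFO level for logging.info to emit, which run_cogs.py's configuration handles):

```python
# Sketch: log a mismatched prediction in the same format grepped for above
# ("INFO:root:Mistake (category ...)"). Names below are illustrative stand-ins.
import logging

logging.basicConfig(filename="error_analysis_demo.log", filemode="w",
                    level=logging.INFO, force=True)

cat = "obj_pp_to_subj_pp"
pred = "agent ( 8 , 7 )"     # stand-in for decoded_preds[i]
label = "agent ( 8 , 1 )"    # stand-in for decoded_labels[i]
inp = "The baby on a tray in the house screamed ."  # stand-in for input_labels[i]

if pred != label:  # the "not eq" branch in run_cogs.py's do_gen evaluation
    logging.info(f"Mistake (category {cat}): '{pred}', Expected: '{label}', input: {inp}")

logging.shutdown()
with open("error_analysis_demo.log") as f:
    print(f.read())
```

The default log format prefixes each record with `INFO:root:`, which is exactly what the `grep`/`sed` pipeline earlier in the notebook strips when building the TSV.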
In [ ]:
# baseline Wu et al 2023 model and baseline data
!python run_cogs.py --model_name ende_transformer --use_iiem --gpu 1 --train_batch_size 128 --eval_batch_size 128 --lr 0.0001 --data_path ./recogs_positional_index --output_dir ./results_recogs_positional_index_control --lfs cogs --do_train --do_test --do_gen --max_seq_len 512 --output_json --epochs 300 --seeds "88"
EncoderDecoderModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Epoch: 0:   0% 0/213 [00:00<?, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 0: 100% 213/213 [00:12<00:00, 17.66it/s, loss=5.98]
Epoch: 1: 100% 213/213 [00:10<00:00, 20.55it/s, loss=4.6]
Epoch: 2: 100% 213/213 [00:10<00:00, 20.57it/s, loss=3.51]
Epoch: 3: 100% 213/213 [00:10<00:00, 20.56it/s, loss=2.38]
Epoch: 4: 100% 213/213 [00:10<00:00, 20.57it/s, loss=1.81]
Epoch: 5: 100% 213/213 [00:10<00:00, 20.47it/s, loss=1.48]
Epoch: 6: 100% 213/213 [00:10<00:00, 20.55it/s, loss=1.2]
Epoch: 7: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0.99]
Epoch: 8: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0.86]
Epoch: 9: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0.77]
Epoch: 10: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0.68]
Epoch: 11: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.58]
Epoch: 12: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0.51]
Epoch: 13: 100% 213/213 [00:10<00:00, 20.58it/s, loss=0.43]
Epoch: 14: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0.36]
Epoch: 15: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0.3]
Epoch: 16: 100% 213/213 [00:10<00:00, 20.54it/s, loss=0.25]
Epoch: 17: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.19]
Epoch: 18: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.16]
Epoch: 19: 100% 213/213 [00:10<00:00, 20.54it/s, loss=0.12]
Epoch: 20: 100% 213/213 [00:10<00:00, 20.52it/s, loss=0.1]
Epoch: 21: 100% 213/213 [00:10<00:00, 20.53it/s, loss=0.08]
Epoch: 22: 100% 213/213 [00:10<00:00, 20.54it/s, loss=0.07]
Epoch: 23: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.06]
Epoch: 24: 100% 213/213 [00:10<00:00, 20.54it/s, loss=0.04]
Epoch: 25: 100% 213/213 [00:10<00:00, 20.53it/s, loss=0.04]
Epoch: 26: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0.03]
Epoch: 27: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0.03]
Epoch: 28: 100% 213/213 [00:10<00:00, 20.53it/s, loss=0.03]
Epoch: 29: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.02]
Epoch: 30: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0.02]
Epoch: 31: 100% 213/213 [00:10<00:00, 20.54it/s, loss=0.01]
Epoch: 32: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0.01]
Epoch: 33: 100% 213/213 [00:10<00:00, 20.54it/s, loss=0.01]
Epoch: 34: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0.01]
Epoch: 35: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 36: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0.01]
Epoch: 37: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0.01]
Epoch: 38: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0.01]
Epoch: 39: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0.01]
Epoch: 40: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 41: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0]
Epoch: 42: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0.01]
Epoch: 43: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0.01]
Epoch: 44: 100% 213/213 [00:10<00:00, 20.54it/s, loss=0]
Epoch: 45: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0]
Epoch: 46:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 46: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0.01]
Epoch: 47:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 47: 100% 213/213 [00:10<00:00, 20.53it/s, loss=0]
Epoch: 48:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 48: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0.01]
Epoch: 49:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 49: 100% 213/213 [00:10<00:00, 20.54it/s, loss=0]
Epoch: 50:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 50: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0]
Epoch: 51:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 51: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0]
Epoch: 52:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 52: 100% 213/213 [00:10<00:00, 20.52it/s, loss=0]
Epoch: 53:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 53: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0]
Epoch: 54:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 54: 100% 213/213 [00:10<00:00, 20.53it/s, loss=0]
Epoch: 55:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 55: 100% 213/213 [00:10<00:00, 20.54it/s, loss=0]
Epoch: 56:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 56: 100% 213/213 [00:10<00:00, 20.58it/s, loss=0]
Epoch: 57:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 57: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0]
Epoch: 58:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 58: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 59:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 59: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0]
Epoch: 60:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 60: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0]
Epoch: 61:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 61: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0]
Epoch: 62:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 62: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0]
Epoch: 63:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 63: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0]
Epoch: 64:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 64: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 65:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 65: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0]
Epoch: 66:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 66: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0]
Epoch: 67:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 67: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0]
Epoch: 68:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 68: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0]
Epoch: 69:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 69: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0]
Epoch: 70:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 70: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 71:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 71: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0]
Epoch: 72:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 72: 100% 213/213 [00:10<00:00, 20.52it/s, loss=0]
Epoch: 73:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 73: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0]
Epoch: 74:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 74: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0]
Epoch: 75:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 75: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0]
Epoch: 76:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 76: 100% 213/213 [00:10<00:00, 20.52it/s, loss=0]
Epoch: 77:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 77: 100% 213/213 [00:10<00:00, 20.53it/s, loss=0]
Epoch: 78:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 78: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0]
Epoch: 79:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 79: 100% 213/213 [00:10<00:00, 20.58it/s, loss=0]
Epoch: 80:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 80: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0]
Epoch: 81:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 81: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0]
Epoch: 82:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 82: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 83:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 83: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0]
Epoch: 84:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 84: 100% 213/213 [00:10<00:00, 20.57it/s, loss=0]
Epoch: 85:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 85: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0]
Epoch: 86:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 86: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 87:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 87: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 88:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 88: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 89:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 89: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 90:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 90: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 91:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 91: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 92:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 92: 100% 213/213 [00:10<00:00, 20.56it/s, loss=0]
Epoch: 93:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 93: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 94:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 94: 100% 213/213 [00:10<00:00, 20.53it/s, loss=0]
Epoch: 95:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 95: 100% 213/213 [00:10<00:00, 20.52it/s, loss=0]
[... the same FutureWarning and progress lines repeated verbatim for Epochs 96-156: each epoch completed 213/213 batches in ~10 s (~20.1-20.6 it/s) with loss=0 ...]
Epoch: 157:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 157: 100% 213/213 [00:10<00:00, 20.13it/s, loss=0]
Epoch: 158:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 158: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 159:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 159: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 160:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 160: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 161:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 161: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 162:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 162: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 163:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 163: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 164:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 164: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0]
Epoch: 165:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 165: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 166:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 166: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 167:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 167: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 168:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 168: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 169:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 169: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 170:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 170: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 171:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 171: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 172:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 172: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 173:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 173: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 174:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 174: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 175:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 175: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 176:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 176: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 177:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 177: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 178:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 178: 100% 213/213 [00:10<00:00, 20.10it/s, loss=0]
Epoch: 179:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 179: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0]
Epoch: 180:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 180: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0]
Epoch: 181:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 181: 100% 213/213 [00:10<00:00, 19.93it/s, loss=0]
Epoch: 182:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 182: 100% 213/213 [00:10<00:00, 20.04it/s, loss=0]
Epoch: 183:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 183: 100% 213/213 [00:10<00:00, 19.99it/s, loss=0]
Epoch: 184:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 184: 100% 213/213 [00:10<00:00, 20.05it/s, loss=0]
Epoch: 185:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 185: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 186:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 186: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 187:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 187: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 188:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 188: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 189:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 189: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 190:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 190: 100% 213/213 [00:10<00:00, 20.54it/s, loss=0]
Epoch: 191:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 191: 100% 213/213 [00:10<00:00, 20.52it/s, loss=0]
Epoch: 192:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 192: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 193:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 193: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 194:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 194: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 195:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 195: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 196:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 196: 100% 213/213 [00:10<00:00, 20.16it/s, loss=0]
Epoch: 197:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 197: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 198:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 198: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0]
Epoch: 199:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 199: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0]
Epoch: 200:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 200: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 201:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 201: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 202:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 202: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 203:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 203: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0]
Epoch: 204:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 204: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0]
Epoch: 205:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 205: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
[Epochs 206–267: output elided — each epoch emitted the same FutureWarning shown above and completed 213/213 batches in ~10 s (~20 it/s) with loss=0.]
Epoch: 268:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 268: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 269:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 269: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 270:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 270: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0]
Epoch: 271:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 271: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 272:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 272: 100% 213/213 [00:10<00:00, 20.09it/s, loss=0]
Epoch: 273:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 273: 100% 213/213 [00:10<00:00, 19.94it/s, loss=0]
Epoch: 274:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 274: 100% 213/213 [00:10<00:00, 20.08it/s, loss=0]
Epoch: 275:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 275: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 276:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 276: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 277:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 277: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 278:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 278: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 279:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 279: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 280:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 280: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 281:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 281: 100% 213/213 [00:10<00:00, 20.55it/s, loss=0]
Epoch: 282:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 282: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 283:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 283: 100% 213/213 [00:10<00:00, 20.53it/s, loss=0]
Epoch: 284:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 284: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 285:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 285: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 286:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 286: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 287:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 287:  47% 100/213 [00:05<00:05, 20.23it/s, loss=0]

Note: the Colab session (A100) must be kept interactive or it stops saving output, so the end of the log above was not captured; the results can instead be printed from the JSON file.

In [ ]:
!cat /content/ReCOGS/results_recogs_positional_index_control/Dec-18-2024_ende_transformer_recogs_positional_index_cogs_seed_88.json
{
    "88_recogs_positional_index_cogs": {
        "obj_pp_to_subj_pp": 13.6,
        "cp_recursion": 50.0,
        "pp_recursion": 21.4,
        "subj_to_obj_proper": 90.6,
        "prim_to_obj_proper": 90.1,
        "prim_to_subj_proper": 100.0,
        "lex_acc": 94.69333333333333,
        "overall_acc": 85.05238095238096,
        "test_acc": 1.0
    }
}
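The per-split accuracies in the JSON above can also be read programmatically. A minimal sketch (the JSON string below is copied from the seed-88 results printed above; in the notebook you would instead open the file under `results_recogs_positional_index_control/`):

```python
import json

# Copied from the seed-88 results JSON printed above; when running in the
# notebook, open the file under results_recogs_positional_index_control/
# instead of using this embedded string.
results_json = """
{
    "88_recogs_positional_index_cogs": {
        "obj_pp_to_subj_pp": 13.6,
        "cp_recursion": 50.0,
        "pp_recursion": 21.4,
        "subj_to_obj_proper": 90.6,
        "prim_to_obj_proper": 90.1,
        "prim_to_subj_proper": 100.0,
        "lex_acc": 94.69333333333333,
        "overall_acc": 85.05238095238096,
        "test_acc": 1.0
    }
}
"""
splits = json.loads(results_json)["88_recogs_positional_index_cogs"]
# obj_pp_to_subj_pp is far below the splits that do not insert a PP between
# the subject and its verb, consistent with the flat pattern-matching hypothesis
print(splits["obj_pp_to_subj_pp"])  # 13.6
```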
In [ ]:
# extract and move the error logs
!echo "actual	expected	input" > wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!cat wu_et_al_2023_recogs_baseline_for_error_analysis.log | grep obj_pp_to_subj_pp | sed -E 's/INFO:root:Mistake \(category obj_pp_to_subj_pp\)://g' | sed -E 's/, Expected: /	/g' | sed -E 's/, input: /	/g' | sed -E "s/'//g" >> wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!mv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_88.tsv
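The grep/sed pipeline above can equivalently be done in Python. A hypothetical sketch: the log-line shape below is inferred from the sed patterns (prefix `INFO:root:Mistake (category obj_pp_to_subj_pp):`, then quoted actual, expected, and input fields), not taken from `run_cogs.py` itself:

```python
import re

# Inferred from the sed substitutions above, not from run_cogs.py:
# INFO:root:Mistake (category obj_pp_to_subj_pp): '<actual>', Expected: '<expected>', input: '<input>'
MISTAKE_RE = re.compile(
    r"INFO:root:Mistake \(category obj_pp_to_subj_pp\):"
    r"(?P<actual>.*?), Expected: (?P<expected>.*?), input: (?P<input>.*)"
)

def parse_mistake(line):
    """Parse one error-log line into (actual, expected, input), or None."""
    m = MISTAKE_RE.match(line)
    if m is None:
        return None
    # drop surrounding whitespace and the quotes the sed pipeline strips with s/'//g
    return tuple(g.strip().strip("'") for g in m.group("actual", "expected", "input"))

example = ("INFO:root:Mistake (category obj_pp_to_subj_pp):"
           " 'donut ( 1 ) ; table ( 4 )', Expected: 'donut ( 1 )', input: 'A donut grew .'")
print(parse_mistake(example))
```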
In [ ]:
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_88.tsv
actual	expected	input
 * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND scream ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )	The baby on a tray in the house screamed .
 * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 8 )	* spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	The spokesman in the house served Emma the rose .
 donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 8 )	donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	A donkey in the room sold Ella a donut .
 cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 4 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 1 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	A cat in the house was offered the cake by the boy beside a table .
 * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND sneeze ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )	The dog in a bakery in the bag sneezed .
 girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND draw ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 ) AND nmod . on ( 4 , 10 )	girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )	A girl on the stool on the table drew a frog .
 donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 4 )	donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 )	A donut on a table grew .
 * cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 4 )	* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 )	The cake in the house was painted .
 * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 10 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	* sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	The sailor in a house lended a biscuit on a table to a goose .
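Consistent with the hypothesis, the spurious clauses in these errors attach the verb to the prepositional noun. A small sketch (not the paper's analysis code) that diffs the actual vs. expected clause sets, using two (actual, expected) pairs copied from the TSV rows above:

```python
import re

def clauses(lf):
    """Split a ReCOGS logical form into its individual clauses."""
    return {c.strip() for c in re.split(r" AND | ; ", lf)}

# two (actual, expected) pairs copied from the seed-88 error TSV above
errors = [
    # "The cake in the house was painted ."
    ("* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 4 )",
     "* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 )"),
    # "A donut on a table grew ."
    ("donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 4 )",
     "donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 )"),
]
for actual, expected in errors:
    spurious = clauses(actual) - clauses(expected)
    missing = clauses(expected) - clauses(actual)
    print(sorted(spurious), sorted(missing))
# in both rows the model hallucinates agent ( verb , 4 ): the index-4 noun
# introduced by the prepositional phrase is mistaken for an argument of the verb
```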
In [ ]:
!mv wu_et_al_2023_recogs_baseline_for_error_analysis.log wu_et_al_2023_recogs_baseline_for_error_analysis_seed_88.log
In [ ]:
# baseline Wu et al 2023 model and baseline data
!python run_cogs.py --model_name ende_transformer --use_iiem --gpu 1 --train_batch_size 128 --eval_batch_size 128 --lr 0.0001 --data_path ./recogs_positional_index --output_dir ./results_recogs_positional_index_control --lfs cogs --do_train --do_test --do_gen --max_seq_len 512 --output_json --epochs 300 --seeds "99"
EncoderDecoderModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Epoch: 0:   0% 0/213 [00:00<?, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
Epoch: 0: 100% 213/213 [00:11<00:00, 18.73it/s, loss=5.87]
Epoch: 1: 100% 213/213 [00:10<00:00, 20.32it/s, loss=4.55]
Epoch: 2: 100% 213/213 [00:10<00:00, 20.27it/s, loss=3.48]
Epoch: 3: 100% 213/213 [00:10<00:00, 20.38it/s, loss=2.44]
Epoch: 4: 100% 213/213 [00:10<00:00, 20.37it/s, loss=1.91]
Epoch: 5: 100% 213/213 [00:10<00:00, 20.34it/s, loss=1.58]
Epoch: 6: 100% 213/213 [00:10<00:00, 20.38it/s, loss=1.32]
Epoch: 7: 100% 213/213 [00:10<00:00, 20.37it/s, loss=1.14]
Epoch: 8: 100% 213/213 [00:10<00:00, 20.28it/s, loss=1.01]
Epoch: 9: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.89]
Epoch: 10: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.79]
Epoch: 11: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.7]
Epoch: 12: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.61]
Epoch: 13: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.53]
Epoch: 14: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.46]
Epoch: 15: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.39]
Epoch: 16: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.33]
Epoch: 17: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0.28]
Epoch: 18: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.23]
Epoch: 19: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.2]
Epoch: 20: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.16]
Epoch: 21: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.14]
Epoch: 22: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.11]
Epoch: 23: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.09]
Epoch: 24: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.08]
Epoch: 25:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 25: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.07]
Epoch: 26:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 26: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.06]
Epoch: 27:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 27: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.05]
Epoch: 28:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 28: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.04]
Epoch: 29:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 29: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0.04]
Epoch: 30:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 30: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.04]
Epoch: 31:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 31: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.03]
Epoch: 32:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 32: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.03]
Epoch: 33:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 33: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.03]
Epoch: 34:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 34: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0.02]
Epoch: 35:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 35: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.02]
Epoch: 36:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 36: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.02]
Epoch: 37:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 37: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.02]
Epoch: 38:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 38: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.01]
Epoch: 39:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 39: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.01]
Epoch: 40:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 40: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 41:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 41: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 42:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 42: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 43:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 43: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.01]
Epoch: 44:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 44: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.01]
Epoch: 45:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 45: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.01]
Epoch: 46:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 46: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 47:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 47: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.01]
Epoch: 48:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 48: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.01]
Epoch: 49:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 49: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.01]
Epoch: 50:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 50: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 51:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 51: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 52:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 52: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.01]
Epoch: 53:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 53: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 54:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 54: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 55:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 55: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.01]
Epoch: 56:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 56: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 57:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 57: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 58:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 58: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 59:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 59: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.01]
Epoch: 60:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 60: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.01]
Epoch: 61:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 61: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 62:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 62: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 63:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 63: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 64:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 64: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 65:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 65: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 66:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 66: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 67:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 67: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 68:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 68: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 69:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 69: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 70:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 70: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 71:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 71: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 72:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 72: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 73:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 73: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 74: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 75: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 76: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.01]
Epoch: 77: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 78: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 79: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 80: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 81: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 82: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 83: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 84: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 85: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 86: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 87: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 88: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 89: 100% 213/213 [00:10<00:00, 19.95it/s, loss=0]
Epoch: 90: 100% 213/213 [00:10<00:00, 19.80it/s, loss=0]
Epoch: 91: 100% 213/213 [00:10<00:00, 19.75it/s, loss=0]
Epoch: 92: 100% 213/213 [00:10<00:00, 19.80it/s, loss=0]
Epoch: 93: 100% 213/213 [00:10<00:00, 19.84it/s, loss=0]
Epoch: 94: 100% 213/213 [00:10<00:00, 19.82it/s, loss=0]
Epoch: 95: 100% 213/213 [00:10<00:00, 19.71it/s, loss=0]
Epoch: 96: 100% 213/213 [00:10<00:00, 19.85it/s, loss=0]
Epoch: 97: 100% 213/213 [00:10<00:00, 19.78it/s, loss=0]
Epoch: 98: 100% 213/213 [00:10<00:00, 19.78it/s, loss=0]
Epoch: 99: 100% 213/213 [00:10<00:00, 19.78it/s, loss=0]
Epoch: 100: 100% 213/213 [00:10<00:00, 19.78it/s, loss=0]
Epoch: 101: 100% 213/213 [00:10<00:00, 19.70it/s, loss=0]
Epoch: 102: 100% 213/213 [00:10<00:00, 19.79it/s, loss=0]
Epoch: 103: 100% 213/213 [00:10<00:00, 19.80it/s, loss=0]
Epoch: 104: 100% 213/213 [00:10<00:00, 19.79it/s, loss=0]
Epoch: 105: 100% 213/213 [00:10<00:00, 19.86it/s, loss=0]
Epoch: 106: 100% 213/213 [00:10<00:00, 20.10it/s, loss=0]
Epoch: 107: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 108: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 109: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 110: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 111: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 112: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 113: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 114: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 115: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 116: 100% 213/213 [00:10<00:00, 19.78it/s, loss=0]
Epoch: 117: 100% 213/213 [00:10<00:00, 19.97it/s, loss=0]
Epoch: 118: 100% 213/213 [00:10<00:00, 19.76it/s, loss=0]
Epoch: 119: 100% 213/213 [00:10<00:00, 19.88it/s, loss=0]
Epoch: 120: 100% 213/213 [00:10<00:00, 19.81it/s, loss=0]
Epoch: 121: 100% 213/213 [00:10<00:00, 20.12it/s, loss=0]
Epoch: 122: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 123: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 124: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 125: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 126: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 127: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 128: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 129: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 130: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 131: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 132: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 133: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 134: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 135: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 136:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 136: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 137:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 137: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 138:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 138: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 139:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 139: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 140:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 140: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 141:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 141: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 142:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 142: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 143:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 143: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 144:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 144: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 145:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 145: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 146:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 146: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 147:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 147: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 148:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 148: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 149:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 149: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 150:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 150: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 151:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 151: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 152:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 152: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 153:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 153: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 154:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 154: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 155:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 155: 100% 213/213 [00:10<00:00, 20.03it/s, loss=0]
Epoch: 156:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 156: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 157:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 157: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 158:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 158: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 159:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 159: 100% 213/213 [00:10<00:00, 20.18it/s, loss=0]
Epoch: 160:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 160: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 161:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 161: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 162:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 162: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 163:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 163: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 164:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 164: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 165:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 165: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 166:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 166: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 167:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 167: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 168:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 168: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 169:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 169: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 170:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 170: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 171:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 171: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 172:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 172: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 173:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 173: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 174:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 174: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 175:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 175: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 176:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 176: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 177:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 177: 100% 213/213 [00:10<00:00, 20.19it/s, loss=0]
Epoch: 178:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 178: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 179:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 179: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 180:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 180: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 181:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 181: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 182:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 182: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 183:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 183: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 184:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 184: 100% 213/213 [00:10<00:00, 20.15it/s, loss=0]
Epoch: 185:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 185: 100% 213/213 [00:10<00:00, 20.14it/s, loss=0]
Epoch: 186:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 186: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 187:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 187: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 188:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 188: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 189:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 189: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 190:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 190: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 191:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 191: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 192:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 192: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 193:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 193: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 194:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 194: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 195:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 195: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 196:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 196: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 197:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 197: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 198:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 198: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 199:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 199: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 200:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 200: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 201:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 201: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 202:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 202: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 203:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 203: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 204:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 204: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 205:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 205: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 206:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 206: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 207:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 207: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 208:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 208: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 209:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 209: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 210:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 210: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 211:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 211: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 212:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 212: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 213:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 213: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 214:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 214: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 215:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 215: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 216:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 216: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 217:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 217: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 218:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 218: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 219:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 219: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 220:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 220: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 221:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 221: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 222:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 222: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 223:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 223: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 224:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 224: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 225:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 225: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 226:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 226: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 227:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 227: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 228:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 228: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 229:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 229: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 230:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 230: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 231:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 231: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 232:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 232: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 233:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 233: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 234:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 234: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 235:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 235: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 236:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 236: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 237:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 237: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 238:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 238: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 239:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 239: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 240: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 241: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 242: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 243: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 244: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 245: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 246: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 247: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 248: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 249: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 250: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 251: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 252: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 253: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 254: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 255: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 256: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 257: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 258: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 259: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 260: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 261: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 262: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 263: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 264: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 265: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 266: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 267: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 268: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 269: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 270: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 271: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 272: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 273: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 274: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 275: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 276: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 277: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 278: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 279: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 280: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 281: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 282: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 283: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 284: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 285: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 286: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 287: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 288: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 289: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 290: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 291: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 292: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 293: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 294: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 295: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 296: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 297: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 298: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 299: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 100% 300/300 [52:50<00:00, 10.57s/it]
Iteration: 100% 24/24 [00:10<00:00,  2.33it/s, acc=1]
Iteration: 100% 165/165 [11:47<00:00,  4.29s/it, acc=0.858]
obj_pp_to_subj_pp: 18.3
cp_recursion: 51.3
pp_recursion: 48.2
subj_to_obj_proper: 94.2
prim_to_obj_proper: 91.1
prim_to_subj_proper: 100.0
LEX: 93.24
OVERALL: 85.79523809523809
In [ ]:
# extract and move the error logs
!echo "actual	expected	input" > wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!cat wu_et_al_2023_recogs_baseline_for_error_analysis.log | grep obj_pp_to_subj_pp | sed -E 's/INFO:root:Mistake \(category obj_pp_to_subj_pp\)://g' | sed -E 's/, Expected: /	/g' | sed -E 's/, input: /	/g' | sed -E "s/'//g" >> wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!mv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_99.tsv
actual	expected	input
 * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 7 )	* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )	The baby on a tray in the house screamed .
 * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 )	* spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	The spokesman in the house served Emma the rose .
 donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 8 )	donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	A donkey in the room sold Ella a donut .
 cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND theme ( 6 , 1 ) AND recipient ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 1 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	A cat in the house was offered the cake by the boy beside a table .
 * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND sneeze ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )	The dog in a bakery in the bag sneezed .
 girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )	girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )	A girl on the stool on the table drew a frog .
 * cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 4 )	* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 )	The cake in the house was painted .
 * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 13 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	* sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	The sailor in a house lended a biscuit on a table to a goose .
 visitor ( 1 ) ; * pile ( 4 ) ; resident ( 7 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 4 )	visitor ( 1 ) ; * pile ( 4 ) ; resident ( 7 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )	A visitor in the pile rolled a resident .
In [ ]:
!mv wu_et_al_2023_recogs_baseline_for_error_analysis.log wu_et_al_2023_recogs_baseline_for_error_analysis_seed_99.log
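The renamed TSV can be scanned for the predicted error pattern. As a minimal sketch (the `role_errors` helper and regex are ours for illustration, not part of the notebook's pipeline), comparing the semantic-role triples in each actual vs. expected logical form shows the model assigning the role to the nearer prepositional noun instead of the head noun left of the verb, consistent with the flat (non-tree) pattern-matching hypothesis. Here we apply it to the first error row above ("The baby on a tray in the house screamed ."):

```python
import re

# Hypothetical helper (not from the notebook's code): extract role triples like
# "agent ( 8 , 1 )" from a ReCOGS logical form and diff actual vs expected.
ROLE_RE = re.compile(r"(agent|theme|recipient) \( (\d+) , (\d+) \)")

def role_errors(actual_lf, expected_lf):
    actual = set(ROLE_RE.findall(actual_lf))
    expected = set(ROLE_RE.findall(expected_lf))
    # (gold triples the model missed, triples the model produced instead)
    return sorted(expected - actual), sorted(actual - expected)

# First error row from the TSV above
actual = "* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 7 )"
expected = "* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )"
missed, spurious = role_errors(actual, expected)
print(missed)    # [('agent', '8', '1')] - the gold agent: subject noun at index 1
print(spurious)  # [('agent', '8', '7')] - predicted agent: the prepositional noun nearest the verb
```

The same diff applied over every row of the TSV would let one count how often the spurious triple uses the noun index closest to the verb, as the hypothesis predicts.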
In [ ]:
# baseline Wu et al 2023 model and baseline data
!python run_cogs.py --model_name ende_transformer --use_iiem --gpu 1 --train_batch_size 128 --eval_batch_size 128 --lr 0.0001 --data_path ./recogs_positional_index --output_dir ./results_recogs_positional_index_control --lfs cogs --do_train --do_test --do_gen --max_seq_len 512 --output_json --epochs 300 --seeds "43"
EncoderDecoderModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Epoch: 0:   0% 0/213 [00:00<?, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
Epoch: 0: 100% 213/213 [00:11<00:00, 18.70it/s, loss=6]
Epoch: 1: 100% 213/213 [00:10<00:00, 20.29it/s, loss=4.59]
Epoch: 2: 100% 213/213 [00:10<00:00, 20.29it/s, loss=3.54]
Epoch: 3: 100% 213/213 [00:10<00:00, 20.30it/s, loss=2.48]
Epoch: 4: 100% 213/213 [00:10<00:00, 20.32it/s, loss=1.92]
Epoch: 5: 100% 213/213 [00:10<00:00, 20.26it/s, loss=1.55]
Epoch: 6: 100% 213/213 [00:10<00:00, 20.31it/s, loss=1.26]
Epoch: 7: 100% 213/213 [00:10<00:00, 20.31it/s, loss=1.08]
Epoch: 8: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.96]
Epoch: 9: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0.87]
Epoch: 10: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.77]
Epoch: 11: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0.68]
Epoch: 12: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.59]
Epoch: 13: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.51]
Epoch: 14: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.45]
Epoch: 15: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.38]
Epoch: 16: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.32]
Epoch: 17: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0.27]
Epoch: 18: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.22]
Epoch: 19: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.19]
Epoch: 20: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.15]
Epoch: 21: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.12]
Epoch: 22: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.1]
Epoch: 23: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0.08]
Epoch: 24: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.06]
Epoch: 25: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.06]
Epoch: 26: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.04]
Epoch: 27: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0.03]
Epoch: 28: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.03]
Epoch: 29: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0.03]
Epoch: 30: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.03]
Epoch: 31: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0.03]
Epoch: 32: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.02]
Epoch: 33: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.02]
Epoch: 34: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.02]
Epoch: 35: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.01]
Epoch: 36: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.01]
Epoch: 37: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.01]
Epoch: 38: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0.01]
Epoch: 39: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.01]
Epoch: 40: 100% 213/213 [00:10<00:00, 20.22it/s, loss=0.01]
Epoch: 41: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.01]
Epoch: 42: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.01]
Epoch: 43: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.01]
Epoch: 44: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.01]
Epoch: 45: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0.01]
Epoch: 46: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 47:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 47: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 48:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 48: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 49:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 49: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 50:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 50: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 51:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 51: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 52:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 52: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0.01]
Epoch: 53:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 53: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 54:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 54: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 55:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 55: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 56:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 56: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0.01]
Epoch: 57:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 57: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 58:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 58: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0.01]
Epoch: 59:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 59: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 60:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 60: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 61:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 61: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 62:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 62: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 63:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 63: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 64:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 64: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 65:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 65: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 66:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 66: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 67:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 67: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 68:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 68: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 69:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 69: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 70:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 70: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 71:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 71: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 72:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 72: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 73:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 73: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 74:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 74: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 75:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 75: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 76:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 76: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 77:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 77: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 78:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 78: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 79:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 79: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 80:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 80: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 81:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 81: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 82:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 82: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 83:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 83: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 84:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 84: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 85:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 85: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 86:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 86: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 87:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 87: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 88:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 88: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 89:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 89: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 90:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 90: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 91:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 91: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 92:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 92: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 93:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 93: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 94:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 94: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 95:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 95: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 96:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 96: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 97: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 98: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 99: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 100: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 101: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 102: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 103: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 104: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 105: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 106: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 107: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 108: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 109: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 110: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 111: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 112: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 113: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 114: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 115: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 116: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 117: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 118: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 119: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 120: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 121: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 122: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 123: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 124: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 125: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 126: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 127: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 128: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 129: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 130: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 131: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 132: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 133: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 134: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 135: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 136: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 137: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 138: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 139: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 140: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 141: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 142: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 143: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 144: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 145: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 146: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 147: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 148: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 149: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 150: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 151: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 152: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 153: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 154: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 155: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 156: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 157: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 158:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 158: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 159:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 159: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 160:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 160: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 161:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 161: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 162:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 162: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 163:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 163: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 164:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 164: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 165:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 165: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 166:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 166: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 167:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 167: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 168:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 168: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 169:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 169: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 170:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 170: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 171:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 171: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 172:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 172: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 173:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 173: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 174:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 174: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 175:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 175: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 176:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 176: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 177:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 177: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 178:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 178: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 179:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 179: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 180:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 180: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 181:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 181: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 182:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 182: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 183:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 183: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 184:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 184: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 185:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 185: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 186:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 186: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 187:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 187: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 188:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 188: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 189:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 189: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 190:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 190: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 191:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 191: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 192:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 192: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 193:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 193: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 194:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 194: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 195:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 195: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 196:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 196: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 197:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 197: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 198:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 198: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 199:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 199: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 200:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 200: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 201:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 201: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 202:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 202: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 203:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 203: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 204:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 204: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 205:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 205: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 206:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 206: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 207:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 207: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 208:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 208: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 209:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 209: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 210:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 210: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 211:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 211: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 212:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 212: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 213:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 213: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 214:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 214: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 215:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 215: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 216:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 216: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 217:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 217: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 218:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 218: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 219:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 219: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 220:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 220: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 221:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 221: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 222:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 222: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 223:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 223: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 224:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 224: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 225:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 225: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 226:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 226: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 227:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 227: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 228:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 228: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 229:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 229: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 230:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 230: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 231:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 231: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 232:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 232: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 233:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 233: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 234:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 234: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 235:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 235: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 236:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 236: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 237:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 237: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 238:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 238: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 239:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 239: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 240:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 240: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 241:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 241: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 242:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 242: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 243:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 243: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 244:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 244: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 245:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 245: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 246:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 246: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 247:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 247: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 248:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 248: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 249:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 249: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 250:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 250: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 251:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 251: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 252:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 252: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 253:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 253: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 254:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 254: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 255:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 255: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 256:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 256: 100% 213/213 [00:10<00:00, 20.26it/s, loss=0]
Epoch: 257:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 257: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 258:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 258: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 259:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 259: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 260:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 260: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 261:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 261: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 262:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 262: 100% 213/213 [00:10<00:00, 20.23it/s, loss=0]
Epoch: 263: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 264: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 265: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 266: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 267: 100% 213/213 [00:10<00:00, 20.28it/s, loss=0]
Epoch: 268: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 269: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 270: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 271: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 272: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 273: 100% 213/213 [00:10<00:00, 20.25it/s, loss=0]
Epoch: 274: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 275: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 276: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 277: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 278: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 279: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0]
Epoch: 280: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 281: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 282: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 283: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 284: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 285: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 286: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 287: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 288: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 289: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 290: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 291: 100% 213/213 [00:10<00:00, 20.21it/s, loss=0]
Epoch: 292: 100% 213/213 [00:10<00:00, 20.31it/s, loss=0]
Epoch: 293: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 294: 100% 213/213 [00:10<00:00, 20.29it/s, loss=0]
Epoch: 295: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 296: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 297: 100% 213/213 [00:10<00:00, 20.24it/s, loss=0]
Epoch: 298: 100% 213/213 [00:10<00:00, 20.30it/s, loss=0]
Epoch: 299: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0]
Epoch: 100% 300/300 [52:46<00:00, 10.55s/it]
Iteration: 100% 24/24 [00:10<00:00,  2.33it/s, acc=1]
Iteration: 100% 165/165 [18:02<00:00,  6.56s/it, acc=0.905]
obj_pp_to_subj_pp: 20.2
cp_recursion: 52.0
pp_recursion: 61.8
subj_to_obj_proper: 88.4
prim_to_obj_proper: 83.0
prim_to_subj_proper: 100.0
LEX: 99.62666666666667
OVERALL: 90.46666666666667
In [ ]:
# extract the obj_pp_to_subj_pp error lines from the training log into a TSV
!echo "actual	expected	input" > wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!cat wu_et_al_2023_recogs_baseline_for_error_analysis.log | grep obj_pp_to_subj_pp | sed -E 's/INFO:root:Mistake \(category obj_pp_to_subj_pp\)://g' | sed -E 's/, Expected: /	/g' | sed -E 's/, input: /	/g' | sed -E "s/'//g" >> wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
#!mv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_43.tsv
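The grep/sed pipeline above can equivalently be sketched in Python (illustrative only; `log_to_tsv_rows` is a hypothetical helper name, and the regex assumes the one-mistake-per-line log format that the sed commands target):

```python
import re

# Assumed log format (matching the sed commands above), one mistake per line:
#   INFO:root:Mistake (category obj_pp_to_subj_pp):'<actual>', Expected: '<expected>', input: '<input>'
LINE_RE = re.compile(
    r"INFO:root:Mistake \(category obj_pp_to_subj_pp\):(.*), Expected: (.*), input: (.*)"
)

def log_to_tsv_rows(log_lines):
    """Convert matching log lines into TSV rows with an actual/expected/input header."""
    rows = ["actual\texpected\tinput"]
    for line in log_lines:
        m = LINE_RE.search(line)
        if m:
            # strip the single quotes, as sed -E "s/'//g" does above
            actual, expected, inp = (g.replace("'", "") for g in m.groups())
            rows.append(f"{actual}\t{expected}\t{inp}")
    return rows
```

This is just a readable restatement of the shell pipeline; the notebook itself uses the shell version.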
In [ ]:
!mv wu_et_al_2023_recogs_baseline_for_error_analysis.log wu_et_al_2023_recogs_baseline_for_error_analysis_seed_43.log

Double-check the conversion of the .log file to .tsv for seed 43

In [ ]:
!mv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv  wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_43.tsv
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_43.tsv
actual	expected	input
 * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND scream ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )	The baby on a tray in the house screamed .
 * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 6 )	* spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	The spokesman in the house served Emma the rose .
 donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 6 )	donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	A donkey in the room sold Ella a donut .
 cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND theme ( 6 , 1 ) AND recipient ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 1 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	A cat in the house was offered the cake by the boy beside a table .
 * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND sneeze ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )	The dog in a bakery in the bag sneezed .
 girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND draw ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 ) AND nmod . on ( 7 , 10 )	girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )	A girl on the stool on the table drew a frog .
 donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 4 )	donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 )	A donut on a table grew .
 * cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 4 )	* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 )	The cake in the house was painted .
 visitor ( 1 ) ; * pile ( 4 ) ; resident ( 7 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 7 )	visitor ( 1 ) ; * pile ( 4 ) ; resident ( 7 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )	A visitor in the pile rolled a resident .
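As a quick sanity check on the hypothesized flat-pattern error mode, one can diff the predicates of an error row's actual vs. expected logical form; in the rows above the spurious predicates are systematically `theme`/`agent` relations attaching the verb to the wrong (prepositional) noun. This is an illustrative sketch, not part of the notebook's pipeline; `predicate_diff` is a hypothetical helper:

```python
import re

# Matches ReCOGS-style predicates like "nmod . on ( 1 , 4 )", "agent ( 5 , 4 )",
# and unary ones like "grow ( 5 )" (third group is "" for unary predicates).
PRED_RE = re.compile(r"(\w+(?: \. \w+)?) \( (\d+)(?: , (\d+))? \)")

def predicate_diff(actual, expected):
    """Return predicates the model emitted but shouldn't have, and vice versa."""
    a = set(PRED_RE.findall(actual))
    e = set(PRED_RE.findall(expected))
    return {"spurious": a - e, "missing": e - a}
```

Applied to the `A donut on a table grew .` row above, this flags the spurious `agent ( 5 , 4 )`: the model treated the prepositional noun `table ( 4 )` as an agent of the verb, as the hypothesis predicts.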
In [ ]:
# baseline Wu et al 2023 model and baseline data
!python run_cogs.py --model_name ende_transformer --use_iiem --gpu 1 --train_batch_size 128 --eval_batch_size 128 --lr 0.0001 --data_path ./recogs_positional_index --output_dir ./results_recogs_positional_index_control --lfs cogs --do_train --do_test --do_gen --max_seq_len 512 --output_json --epochs 300 --seeds "67"
EncoderDecoderModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Epoch: 0:   0% 0/213 [00:00<?, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
Epoch: 0: 100% 213/213 [00:11<00:00, 18.59it/s, loss=6.01]
Epoch: 1: 100% 213/213 [00:10<00:00, 20.37it/s, loss=4.63]
Epoch: 2: 100% 213/213 [00:10<00:00, 20.37it/s, loss=3.63]
Epoch: 3: 100% 213/213 [00:10<00:00, 20.31it/s, loss=2.54]
Epoch: 4: 100% 213/213 [00:10<00:00, 20.42it/s, loss=1.94]
Epoch: 5: 100% 213/213 [00:10<00:00, 20.43it/s, loss=1.58]
Epoch: 6: 100% 213/213 [00:10<00:00, 20.43it/s, loss=1.32]
Epoch: 7: 100% 213/213 [00:10<00:00, 20.43it/s, loss=1.15]
Epoch: 8: 100% 213/213 [00:10<00:00, 20.43it/s, loss=1.03]
Epoch: 9: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.92]
Epoch: 10: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.83]
Epoch: 11: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.74]
Epoch: 12: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.67]
Epoch: 13: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.6]
Epoch: 14:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 14: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.52]
Epoch: 15:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 15: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.46]
Epoch: 16:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 16: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0.38]
Epoch: 17:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 17: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0.34]
Epoch: 18:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 18: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0.29]
Epoch: 19:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 19: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.26]
Epoch: 20:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 20: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.22]
Epoch: 21:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 21: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.2]
Epoch: 22:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 22: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.18]
Epoch: 23:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 23: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.17]
Epoch: 24:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 24: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.14]
Epoch: 25:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 25: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.14]
Epoch: 26:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 26: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.12]
Epoch: 27:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 27: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.1]
Epoch: 28:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 28: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.1]
Epoch: 29:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 29: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.1]
Epoch: 30:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 30: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.1]
Epoch: 31:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 31: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.08]
Epoch: 32:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 32: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.08]
Epoch: 33:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 33: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.07]
Epoch: 34:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 34: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.07]
Epoch: 35:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 35: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.06]
Epoch: 36:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 36: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.06]
Epoch: 37:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 37: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.06]
Epoch: 38:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 38: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0.05]
Epoch: 39:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 39: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.05]
Epoch: 40:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 40: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.05]
Epoch: 41:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 41: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.04]
Epoch: 42:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 42: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.05]
Epoch: 43:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 43: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.04]
Epoch: 44:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 44: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.04]
Epoch: 45:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 45: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.05]
Epoch: 46:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 46: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0.04]
Epoch: 47:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 47: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0.04]
Epoch: 48:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 48: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.04]
Epoch: 49:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 49: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.03]
Epoch: 50:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 50: 100% 213/213 [00:10<00:00, 20.32it/s, loss=0.03]
Epoch: 51:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 51: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0.03]
Epoch: 52:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 52: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.03]
Epoch: 53:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 53: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.03]
Epoch: 54:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 54: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.03]
Epoch: 55:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 55: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.03]
Epoch: 56:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 56: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.03]
Epoch: 57:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 57: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.02]
Epoch: 58:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 58: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.03]
Epoch: 59:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 59: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.02]
Epoch: 60:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 60: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.03]
Epoch: 61:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 61: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.02]
Epoch: 62:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 62: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.02]
Epoch: 63: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.02]
Epoch: 64: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.02]
Epoch: 65: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.02]
Epoch: 66: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.02]
Epoch: 67: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.02]
Epoch: 68: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.02]
Epoch: 69: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.02]
Epoch: 70: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.01]
Epoch: 71: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.01]
Epoch: 72: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.02]
Epoch: 73: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0.02]
Epoch: 74: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.01]
Epoch: 75: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.01]
Epoch: 76: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.01]
Epoch: 77: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.01]
Epoch: 78: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 79: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 80: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0.01]
Epoch: 81: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.01]
Epoch: 82: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.01]
Epoch: 83: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.01]
Epoch: 84: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.01]
Epoch: 85: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0.01]
Epoch: 86: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.01]
Epoch: 87: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.01]
Epoch: 88: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 89: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 90: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 91: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.01]
Epoch: 92: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.01]
Epoch: 93: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.01]
Epoch: 94: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 95: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.01]
Epoch: 96: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 97: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 98: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.01]
Epoch: 99: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.01]
Epoch: 100: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 101: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.01]
Epoch: 102: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 103: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.01]
Epoch: 104: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 105: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0.01]
Epoch: 106: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0.01]
Epoch: 107: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 108: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 109: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 110: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 111: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 112: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 113: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 114: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 115: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 116: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 117: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 118: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 119: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 120: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 121: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 122: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 123: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.01]
Epoch: 124: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0.01]
Epoch: 125:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 125: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 126:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 126: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 127:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 127: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 128:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 128: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 129:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 129: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 130:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 130: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 131:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 131: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 132:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 132: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 133:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 133: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 134:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 134: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 135:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 135: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 136:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 136: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 137:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 137: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 138:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 138: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 139:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 139: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.01]
Epoch: 140:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 140: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 141:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 141: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 142:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 142: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 143:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 143: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 144:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 144: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 145:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 145: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 146:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 146: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 147:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 147: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 148:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 148: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 149:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 149: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 150:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 150: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 151:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 151: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 152:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 152: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 153:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 153: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 154:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 154: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 155:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 155: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 156:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 156: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 157:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 157: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 158:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 158: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 159:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 159: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 160:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 160: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 161:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 161: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 162:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 162: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 163:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 163: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 164:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 164: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 165:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 165: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 166:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 166: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 167:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 167: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 168:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 168: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 169:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 169: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 170:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 170: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 171:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 171: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 172:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 172: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 173:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 173: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
[Output condensed: epochs 174–234 produced identical output — each completed 213/213 batches in ~10 s (20.3–20.5 it/s) with loss=0, each preceded by the same per-epoch FutureWarning shown above.]
Epoch: 235:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 235: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 236:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 236: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 237:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 237: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 238:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 238: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 239:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 239: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 240:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 240: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 241:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 241: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 242:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 242: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 243:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 243: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 244:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 244: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 245:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 245: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 246:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 246: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 247:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 247: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 248:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 248: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 249:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 249: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 250:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 250: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 251:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 251: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 252:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 252: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 253:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 253: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 254:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 254: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 255:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 255: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 256:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 256: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 257:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 257: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 258:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 258: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 259:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 259: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 260:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 260: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 261:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 261: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 262:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 262: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 263:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 263: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 264:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 264: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 265:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 265: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 266:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 266: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 267:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 267: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 268:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 268: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 269:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 269: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 270:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 270: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 271:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 271: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 272:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 272: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 273:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 273: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 274:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 274: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 275:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 275: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 276:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 276: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 277:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 277: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 278:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 278: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 279:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 279: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 280:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 280: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 281:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 281: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 282:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 282: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 283:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 283: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 284:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 284: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 285: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 286: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 287: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 288: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 289: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 290: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 291: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 292: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 293: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 294: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 295: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 296: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 297: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 298: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 299: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 100% 300/300 [52:32<00:00, 10.51s/it]
Iteration: 100% 24/24 [00:10<00:00,  2.35it/s, acc=1]
Iteration: 100% 165/165 [23:05<00:00,  8.39s/it, acc=0.879]
obj_pp_to_subj_pp: 18.1
cp_recursion: 52.8
pp_recursion: 32.0
subj_to_obj_proper: 82.8
prim_to_obj_proper: 63.4
prim_to_subj_proper: 100.0
LEX: 99.73333333333333
OVERALL: 87.86190476190477
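The obj_pp_to_subj_pp score above (18.1) is far below every other split, consistent with the flat pattern-matching hypothesis. As an illustration (this is a toy heuristic with a hypothetical noun lexicon, NOT the model), a flat "nearest noun left of the verb" rule reproduces exactly the kind of agent misassignment seen in the extracted errors further below, where the baseline outputs `agent ( 8 , 7 )` (the prepositional noun "house") instead of `agent ( 8 , 1 )` (the subject "baby"):

```python
# Illustrative sketch only (not the baseline model): a flat, non-tree heuristic
# that assigns agent to the closest noun left of the verb. The NOUNS lexicon is
# a toy assumption covering just this example sentence.
NOUNS = {"baby", "tray", "house"}

def nearest_noun_left_of(tokens, verb_idx):
    """Return the index of the closest noun preceding the verb, or None."""
    for i in range(verb_idx - 1, -1, -1):
        if tokens[i] in NOUNS:
            return i
    return None

tokens = "The baby on a tray in the house screamed .".split()
# A subject PP inserts "pp np" between subject and verb, so the flat heuristic
# picks "house" (index 7) rather than the true subject "baby" (index 1),
# mirroring the baseline's agent ( 8 , 7 ) error on this sentence.
print(nearest_noun_left_of(tokens, tokens.index("screamed")))  # → 7
```

With no prepositional phrase on the subject ("The baby screamed ."), the same heuristic picks the correct noun, which is why training examples without subject PPs never penalize it.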
In [ ]:
# extract the obj_pp_to_subj_pp errors from the run log into a TSV for error analysis, then tag the file with the seed
!echo "actual	expected	input" > wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!cat wu_et_al_2023_recogs_baseline_for_error_analysis.log | grep obj_pp_to_subj_pp | sed -E 's/INFO:root:Mistake \(category obj_pp_to_subj_pp\)://g' | sed -E 's/, Expected: /	/g' | sed -E 's/, input: /	/g' | sed -E "s/'//g" >> wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!mv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_67.tsv
actual	expected	input
 * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND scream ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )	The baby on a tray in the house screamed .
 * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 8 )	* spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	The spokesman in the house served Emma the rose .
 donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 8 )	donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	A donkey in the room sold Ella a donut .
 cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND theme ( 6 , 1 ) AND recipient ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 1 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	A cat in the house was offered the cake by the boy beside a table .
 * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND sneeze ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )	The dog in a bakery in the bag sneezed .
 girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 10 )	girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )	A girl on the stool on the table drew a frog .
 donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 )	donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 )	A donut on a table grew .
 * cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 4 )	* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 )	The cake in the house was painted .
 * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 13 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	* sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	The sailor in a house lended a biscuit on a table to a goose .
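The TSV rows above can also be checked programmatically for the predicted error pattern. Below is a minimal sketch (not part of the original pipeline; column names match the header written by the `echo` above, and the regex assumes ReCOGS positional indexing in which the sentence-initial subject noun is at index 1) that counts how many errors replaced the expected agent role of the PP-modified subject with theme or recipient:

```python
# Sketch: count errors where the expected logical form marks noun 1 (the
# subject) as agent but the model's output assigned it theme or recipient.
import csv
import re
from typing import Optional, Tuple

def role_of_np1(lf: str) -> Optional[str]:
    """Return the thematic role a logical form assigns to the noun at
    positional index 1, or None if it is assigned no role."""
    m = re.search(r"(agent|theme|recipient) \( \d+ , 1 \)", lf)
    return m.group(1) if m else None

def count_subject_role_swaps(path: str) -> Tuple[int, int]:
    """Return (subject-role-swap errors, total errors) for an error TSV
    with columns: actual, expected, input."""
    swapped = total = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            total += 1
            if (role_of_np1(row["expected"]) == "agent"
                    and role_of_np1(row["actual"]) in ("theme", "recipient")):
                swapped += 1
    return swapped, total
```

Run on wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_67.tsv, this would report what fraction of the obj_pp_to_subj_pp errors are subject-role swaps; under the flat pattern-matching hypothesis, that fraction should be high (note the passive example above, "The cake in the house was painted .", is correctly not counted, since its expected subject role is theme).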
In [ ]:
!mv wu_et_al_2023_recogs_baseline_for_error_analysis.log wu_et_al_2023_recogs_baseline_for_error_analysis_seed_67.log
In [ ]:
# baseline Wu et al 2023 model and baseline data
!python run_cogs.py --model_name ende_transformer --use_iiem --gpu 1 --train_batch_size 128 --eval_batch_size 128 --lr 0.0001 --data_path ./recogs_positional_index --output_dir ./results_recogs_positional_index_control --lfs cogs --do_train --do_test --do_gen --max_seq_len 512 --output_json --epochs 300 --seeds "78"
EncoderDecoderModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Epoch: 0:   0% 0/213 [00:00<?, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
Epoch: 0: 100% 213/213 [00:11<00:00, 18.59it/s, loss=5.96]
Epoch: 1: 100% 213/213 [00:10<00:00, 20.50it/s, loss=4.65]
Epoch: 2: 100% 213/213 [00:10<00:00, 20.41it/s, loss=3.57]
Epoch: 3: 100% 213/213 [00:10<00:00, 20.49it/s, loss=2.5]
Epoch: 4: 100% 213/213 [00:10<00:00, 20.50it/s, loss=1.94]
Epoch: 5: 100% 213/213 [00:10<00:00, 20.49it/s, loss=1.61]
Epoch: 6: 100% 213/213 [00:10<00:00, 20.51it/s, loss=1.35]
Epoch: 7: 100% 213/213 [00:10<00:00, 20.49it/s, loss=1.17]
Epoch: 8: 100% 213/213 [00:10<00:00, 20.43it/s, loss=1.03]
Epoch: 9: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.91]
Epoch: 10: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.82]
Epoch: 11: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.72]
Epoch: 12: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0.63]
Epoch: 13: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.55]
Epoch: 14: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.48]
Epoch: 15: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.42]
Epoch: 16: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.36]
Epoch: 17: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.32]
Epoch: 18: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.27]
Epoch: 19: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.24]
Epoch: 20: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0.21]
Epoch: 21: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.17]
Epoch: 22: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.16]
Epoch: 23: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.14]
Epoch: 24: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.14]
Epoch: 25: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.12]
Epoch: 26: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.11]
Epoch: 27: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.09]
Epoch: 28: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.09]
Epoch: 29: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.07]
Epoch: 30: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.07]
Epoch: 31: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.06]
Epoch: 32: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.06]
Epoch: 33: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.05]
Epoch: 34: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.05]
Epoch: 35: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.04]
Epoch: 36: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.04]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 37: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.04]
Epoch: 38:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 38: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.03]
Epoch: 39:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 39: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.03]
Epoch: 40:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 40: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.02]
Epoch: 41:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 41: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.03]
Epoch: 42:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 42: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.02]
Epoch: 43:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 43: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.02]
Epoch: 44:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 44: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.02]
Epoch: 45:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 45: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.02]
Epoch: 46:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 46: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.02]
Epoch: 47:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 47: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 48:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 48: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 49:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 49: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0.01]
Epoch: 50:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 50: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 51:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 51: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 52:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 52: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.02]
Epoch: 53:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 53: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 54:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 54: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 55:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 55: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.01]
Epoch: 56:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 56: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.01]
Epoch: 57:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 57: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.01]
Epoch: 58:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 58: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.01]
Epoch: 59:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 59: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0.01]
Epoch: 60:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 60: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.01]
Epoch: 61:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 61: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.01]
Epoch: 62:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 62: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.01]
Epoch: 63:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 63: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 64:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 64: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 65:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 65: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 66:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 66: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 67:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 67: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 68:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 68: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 69:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 69: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 70:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 70: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 71:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 71: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 72:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 72: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 73:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 73: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 74:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 74: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 75:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 75: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 76:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 76: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 77:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 77: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 78:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 78: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 79:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 79: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.01]
Epoch: 80:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 80: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.01]
Epoch: 81:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 81: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 82:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 82: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 83:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 83: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 84:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 84: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 85:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 85: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 86:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 86: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 87:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 87: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 88:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 88: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 89:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 89: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 90:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 90: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 91:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 91: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 92:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 92: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 93:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 93: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 94:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 94: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 95:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 95: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 96:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 96: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 97:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 97: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 98:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 98: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 99:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 99: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 100:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 100: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 101:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 101: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 102:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 102: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 103:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 103: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 104:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 104: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 105:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 105: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 106:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 106: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 107:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 107: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 108:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 108: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 109:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 109: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 110:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 110: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 111:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 111: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 112:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 112: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 113:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 113: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 114:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 114: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 115:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 115: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 116:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 116: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 117:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 117: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 118:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 118: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 119:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 119: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 120:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 120: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 121:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 121: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 122:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 122: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 123:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 123: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 124:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 124: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 125:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 125: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 126:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 126: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 127:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 127: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 128:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 128: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 129:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 129: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 130:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 130: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 131:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 131: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 132:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 132: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 133:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 133: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 134:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 134: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 135:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 135: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 136:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 136: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 137:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 137: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 138:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 138: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 139:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 139: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 140:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 140: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 141:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 141: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 142: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 143: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 144: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 145: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 146: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 147: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 148: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 149: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 150: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 151: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 152: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 153: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 154: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 155: 100% 213/213 [00:10<00:00, 20.20it/s, loss=0]
Epoch: 156: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 157: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 158: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 159: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 160: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 161: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 162: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 163: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 164: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 165: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 166: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 167: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 168: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 169: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 170: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 171: 100% 213/213 [00:10<00:00, 20.52it/s, loss=0]
Epoch: 172: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 173: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 174: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 175: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 176: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 177: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 178: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 179: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 180: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 181: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 182: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 183: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 184: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 185: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 186: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 187: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 188: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 189: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 190: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 191: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 192: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 193: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 194: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 195: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 196: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 197: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 198: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 199: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 200: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 201: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 202: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 203:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 203: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 204:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 204: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 205:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 205: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 206:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 206: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 207:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 207: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 208:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 208: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 209:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 209: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 210:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 210: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 211:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 211: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 212:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 212: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 213:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 213: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 214:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 214: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 215:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 215: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 216:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 216: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 217:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 217: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 218:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 218: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 219:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 219: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 220:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 220: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 221:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 221: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 222:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 222: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 223:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 223: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 224:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 224: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 225:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 225: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 226:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 226: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 227:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 227: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 228:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 228: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 229:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 229: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 230:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 230: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 231:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 231: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 232:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 232: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 233:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 233: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 234:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 234: 100% 213/213 [00:10<00:00, 20.33it/s, loss=0]
Epoch: 235:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 235: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 236:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 236: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 237:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 237: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 238:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 238: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 239:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 239: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 240:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 240: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 241:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 241: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 242:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 242: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 243:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 243: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 244:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 244: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 245:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 245: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 246:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 246: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 247:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 247: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 248:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 248: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 249:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 249: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 250:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 250: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 251:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 251: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 252: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
...
Epoch: 299: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 100% 300/300 [52:21<00:00, 10.47s/it]
Iteration: 100% 24/24 [00:10<00:00,  2.34it/s, acc=1]
Iteration: 100% 165/165 [17:32<00:00,  6.38s/it, acc=0.875]
obj_pp_to_subj_pp: 20.0
cp_recursion: 49.3
pp_recursion: 50.1
subj_to_obj_proper: 94.7
prim_to_obj_proper: 88.8
prim_to_subj_proper: 100.0
LEX: 95.64666666666668
OVERALL: 87.5047619047619
In [ ]:
# extract the obj_pp_to_subj_pp mistakes from the run log into a tab-separated file
# (columns: actual model output, expected logical form, input sentence)
!echo "actual	expected	input" > wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!cat wu_et_al_2023_recogs_baseline_for_error_analysis.log | grep obj_pp_to_subj_pp | sed -E 's/INFO:root:Mistake \(category obj_pp_to_subj_pp\)://g' | sed -E 's/, Expected: /	/g' | sed -E 's/, input: /	/g' | sed -E "s/'//g" >> wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!mv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_78.tsv
actual	expected	input
 * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND scream ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )	The baby on a tray in the house screamed .
 * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	* spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	The spokesman in the house served Emma the rose .
 donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	A donkey in the room sold Ella a donut .
 cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 4 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 8 , 14 )	cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 1 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	A cat in the house was offered the cake by the boy beside a table .
 * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 7 )	* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )	The dog in a bakery in the bag sneezed .
 girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND draw ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 ) AND nmod . on ( 7 , 10 )	girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )	A girl on the stool on the table drew a frog .
 * cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 6 )	* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 )	The cake in the house was painted .
 * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	* sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	The sailor in a house lended a biscuit on a table to a goose .
 visitor ( 1 ) ; * pile ( 4 ) ; resident ( 7 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 7 )	visitor ( 1 ) ; * pile ( 4 ) ; resident ( 7 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )	A visitor in the pile rolled a resident .
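The grep/sed pipeline above can equivalently be sketched in Python. This is a hypothetical helper (the function and variable names are not from the notebook); it assumes mistake lines follow the format visible in the sed patterns, i.e. `INFO:root:Mistake (category obj_pp_to_subj_pp):'<actual>', Expected: '<expected>', input: '<input>'`:

```python
import re

# Matches one mistake line from the run log; the three capture groups are
# the model output, the gold logical form, and the input sentence.
LINE = re.compile(
    r"INFO:root:Mistake \(category obj_pp_to_subj_pp\):'?(.*?)'?, "
    r"Expected: '?(.*?)'?, input: '?(.*?)'?$"
)

def parse_mistakes(log_text):
    """Yield (actual, expected, input) tuples for obj_pp_to_subj_pp errors."""
    for line in log_text.splitlines():
        m = LINE.match(line)
        if m:
            yield m.groups()

sample = ("INFO:root:Mistake (category obj_pp_to_subj_pp):'a', "
          "Expected: 'b', input: 'c'")
print(list(parse_mistakes(sample)))  # [('a', 'b', 'c')]
```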
In [ ]:
!mv wu_et_al_2023_recogs_baseline_for_error_analysis.log wu_et_al_2023_recogs_baseline_for_error_analysis_seed_78.log
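Most of the error rows above show the failure mode the hypothesis predicts: the verb's `agent` relation is attached to the wrong noun index (often a prepositional noun) instead of the subject. A minimal sketch of counting that pattern — helper names are hypothetical, and the abbreviated (actual, expected) logical forms are taken from the rows above:

```python
import re

def first_agent(lf):
    """Extract (verb_index, noun_index) of the first 'agent' relation in a
    ReCOGS-style logical form, or None if no agent relation is present."""
    m = re.search(r"agent \( (\d+) , (\d+) \)", lf)
    return (int(m.group(1)), int(m.group(2))) if m else None

# Two (actual, expected) pairs abbreviated from the error TSV above.
rows = [
    ("scream ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )",
     "scream ( 8 ) AND agent ( 8 , 1 )"),
    ("sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )",
     "sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )"),
]

# Count rows where the model kept the right verb index but attached
# 'agent' to a different noun index than the gold logical form.
misattributed = sum(
    1
    for actual, expected in rows
    if (a := first_agent(actual)) and (e := first_agent(expected))
    and a[0] == e[0] and a[1] != e[1]
)
print(misattributed)  # 2
```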

A connectivity issue caused incomplete saving and a glitch in the final shell command that saves the logs for the last 2 seeds, so those runs were repeated (runs are nondeterministic on GPU anyway); the rerun was not motivated by the results. No code was changed — only the lag-induced glitch in the shell command that saves the output was fixed. Note this is also the non-RASP baseline model.

In [ ]:
# retrain the baseline Wu et al 2023 model on the baseline ReCOGS_pos data (seed 89)
!python run_cogs.py --model_name ende_transformer --use_iiem --gpu 1 --train_batch_size 128 --eval_batch_size 128 --lr 0.0001 --data_path ./recogs_positional_index --output_dir ./results_recogs_positional_index_control --lfs cogs --do_train --do_test --do_gen --max_seq_len 512 --output_json --epochs 300 --seeds "89"
EncoderDecoderModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Epoch: 0:   0% 0/213 [00:00<?, ?it/s]
We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 0: 100% 213/213 [00:11<00:00, 18.87it/s, loss=6.03]
Epoch: 1:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 1: 100% 213/213 [00:10<00:00, 20.48it/s, loss=4.65]
Epoch: 2:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 2: 100% 213/213 [00:10<00:00, 20.42it/s, loss=3.6]
Epoch: 3:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 3: 100% 213/213 [00:10<00:00, 20.39it/s, loss=2.53]
Epoch: 4:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 4: 100% 213/213 [00:10<00:00, 20.48it/s, loss=1.97]
Epoch: 5:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 5: 100% 213/213 [00:10<00:00, 20.48it/s, loss=1.61]
Epoch: 6:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 6: 100% 213/213 [00:10<00:00, 20.47it/s, loss=1.34]
Epoch: 7:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 7: 100% 213/213 [00:10<00:00, 20.48it/s, loss=1.16]
Epoch: 8:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 8: 100% 213/213 [00:10<00:00, 20.40it/s, loss=1.03]
Epoch: 9:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 9: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.93]
Epoch: 10:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 10: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.84]
Epoch: 11:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 11: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0.74]
Epoch: 12:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 12: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.66]
Epoch: 13:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 13: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.58]
Epoch: 14:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 14: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.5]
Epoch: 15:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 15: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.42]
Epoch: 16:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 16: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.37]
Epoch: 17:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 17: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0.31]
Epoch: 18:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 18: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0.26]
Epoch: 19:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 19: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.23]
Epoch: 20:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 20: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.19]
Epoch: 21:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 21: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.16]
Epoch: 22:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 22: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.15]
Epoch: 23:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 23: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.12]
Epoch: 24:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 24: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.11]
Epoch: 25:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 25: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.1]
Epoch: 26:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 26: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.09]
Epoch: 27:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 27: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.09]
Epoch: 28:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 28: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.07]
Epoch: 29:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 29: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.06]
Epoch: 30:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 30: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.06]
Epoch: 31:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 31: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.05]
Epoch: 32:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 32: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.06]
Epoch: 33:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 33: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.05]
Epoch: 34:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 34: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.05]
Epoch: 35:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 35: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.04]
Epoch: 36:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 36: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.04]
Epoch: 37:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 37: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.04]
Epoch: 38:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 38: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.03]
Epoch: 39:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 39: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0.03]
Epoch: 40:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 40: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.03]
Epoch: 41:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 41: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.03]
Epoch: 42:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 42: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.03]
Epoch: 43:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 43: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.02]
Epoch: 44:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 44: 100% 213/213 [00:10<00:00, 20.27it/s, loss=0.02]
Epoch: 45:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 45: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.02]
Epoch: 46:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 46: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0.03]
Epoch: 47:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 47: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.02]
Epoch: 48:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 48: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0.02]
Epoch: 49:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 49: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.02]
Epoch: 50:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 50: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.02]
Epoch: 51:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 51: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.02]
Epoch: 52: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0.02]
Epoch: 53: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 54: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0.02]
Epoch: 55: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0.01]
Epoch: 56: 100% 213/213 [00:10<00:00, 20.53it/s, loss=0.01]
Epoch: 57: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 58: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 59: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.02]
Epoch: 60: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 61: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.01]
Epoch: 62: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 63: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 64: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.01]
Epoch: 65: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 66: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 67: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.01]
Epoch: 68: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.01]
Epoch: 69: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.01]
Epoch: 70: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 71: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.01]
Epoch: 72: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 73: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0.01]
Epoch: 74: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 75: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.01]
Epoch: 76: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 77: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 78: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 79: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 80: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 81: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 82: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 83: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.01]
Epoch: 84: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 85: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 86: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 87: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0.01]
Epoch: 88: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 89: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.01]
Epoch: 90: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 91: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 92: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.01]
Epoch: 93: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 94: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 95: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 96: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 97: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 98: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0.01]
Epoch: 99: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 100: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0.01]
Epoch: 101: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 102: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 103: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0.01]
Epoch: 104: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 105: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 106: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 107: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 108: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 109: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 110: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 111: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 112: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 113:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 113: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 114:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 114: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 115:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 115: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 116:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 116: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 117:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 117: 100% 213/213 [00:10<00:00, 20.52it/s, loss=0]
Epoch: 118:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 118: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 119:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 119: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 120:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 120: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.01]
Epoch: 121:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 121: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 122:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 122: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 123:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 123: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 124:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 124: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 125:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 125: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 126:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 126: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 127:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 127: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 128:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 128: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 129:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 129: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 130:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 130: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 131:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 131: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 132:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 132: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 133:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 133: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0.01]
Epoch: 134:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 134: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 135:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 135: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 136:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 136: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 137:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 137: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 138:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 138: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 139:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 139: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 140:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 140: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 141:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 141: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 142:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 142: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 143:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 143: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 144:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 144: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 145:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 145: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 146:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 146: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 147:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 147: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 148:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 148: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 149:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 149: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 150:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 150: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 151:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 151: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 152:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 152: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 153:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 153: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 154:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 154: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 155:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 155: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 156:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 156: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 157:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 157: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 158:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 158: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 159:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 159: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 160:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 160: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 161:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 161: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
[Epochs 162-224: repetitive per-epoch output condensed. Each epoch re-emits the identical FutureWarning from encoder_decoder_hf.py:828 shown above and completes 213/213 batches in ~10s (20.40-20.51 it/s) with loss=0.]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 224: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 225:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 225: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 226:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 226: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 227:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 227: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 228:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 228: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 229:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 229: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 230:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 230: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 231:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 231: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 232:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 232: 100% 213/213 [00:10<00:00, 20.52it/s, loss=0]
Epoch: 233:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 233: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 234:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 234: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 235:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 235: 100% 213/213 [00:10<00:00, 20.52it/s, loss=0]
Epoch: 236:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 236: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 237:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 237: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 238:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 238: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 239:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 239: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 240:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 240: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 241:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 241: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 242:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 242: 100% 213/213 [00:10<00:00, 20.52it/s, loss=0]
Epoch: 243:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 243: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 244:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 244: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 245:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 245: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 246:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 246: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 247:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 247: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 248:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 248: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 249:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 249: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 250:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 250: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 251:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 251: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 252:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 252: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 253:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 253: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 254:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 254: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 255:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 255: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 256:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 256: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 257:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 257: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 258:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 258: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 259:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 259: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 260:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 260: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 261:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 261: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 262:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 262: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 263:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 263: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 264:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 264: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 265:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 265: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 266:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 266: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 267:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 267: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 268:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 268: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 269:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 269: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 270:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 270: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 271:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 271: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 272:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 272: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 273: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 274: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 275: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 276: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
Epoch: 277: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 278: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 279: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 280: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 281: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 282: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 283: 100% 213/213 [00:10<00:00, 20.48it/s, loss=0]
Epoch: 284: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 285: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 286: 100% 213/213 [00:10<00:00, 20.49it/s, loss=0]
Epoch: 287: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 288: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 289: 100% 213/213 [00:10<00:00, 20.51it/s, loss=0]
Epoch: 290: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 291: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 292: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 293: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 294: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 295: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 296: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 297: 100% 213/213 [00:10<00:00, 20.47it/s, loss=0]
Epoch: 298: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 299: 100% 213/213 [00:10<00:00, 20.50it/s, loss=0]
Epoch: 100% 300/300 [52:22<00:00, 10.47s/it]
Iteration: 100% 24/24 [00:10<00:00,  2.32it/s, acc=1]
Iteration: 100% 165/165 [17:09<00:00,  6.24s/it, acc=0.897]
obj_pp_to_subj_pp: 20.2
cp_recursion: 53.5
pp_recursion: 32.0
subj_to_obj_proper: 87.5
prim_to_obj_proper: 97.0
prim_to_subj_proper: 100.0
LEX: 99.52
OVERALL: 89.66666666666666
In [ ]:
# extract the obj_pp_to_subj_pp mistakes from the error log into a TSV (predicted vs. expected logical form per input), then tag the files with the seed
!echo "actual	expected	input" > wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!cat wu_et_al_2023_recogs_baseline_for_error_analysis.log | grep obj_pp_to_subj_pp | sed -E 's/INFO:root:Mistake \(category obj_pp_to_subj_pp\)://g' | sed -E 's/, Expected: /	/g' | sed -E 's/, input: /	/g' | sed -E "s/'//g" >> wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!mv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_89.tsv
actual	expected	input
 * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND scream ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )	The baby on a tray in the house screamed .
 * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	* spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	The spokesman in the house served Emma the rose .
 donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 8 )	donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	A donkey in the room sold Ella a donut .
 cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 4 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 1 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	A cat in the house was offered the cake by the boy beside a table .
 * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND sneeze ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )	The dog in a bakery in the bag sneezed .
 girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND draw ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 ) AND nmod . on ( 7 , 10 )	girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )	A girl on the stool on the table drew a frog .
 * cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 4 )	* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 )	The cake in the house was painted .
 * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	* sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	The sailor in a house lended a biscuit on a table to a goose .
 visitor ( 1 ) ; * pile ( 4 ) ; resident ( 7 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 7 )	visitor ( 1 ) ; * pile ( 4 ) ; resident ( 7 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )	A visitor in the pile rolled a resident .
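The shell pipeline above can also be sketched in Python; the log-line format below is inferred from the sed commands (the helper name `extract_mistakes` is mine, not from the ReCOGS codebase):

```python
import re

# Assumed log format (inferred from the sed pipeline above):
#   INFO:root:Mistake (category obj_pp_to_subj_pp): 'ACTUAL', Expected: 'EXPECTED', input: 'INPUT'
MISTAKE_RE = re.compile(
    r"INFO:root:Mistake \(category (?P<cat>[^)]+)\):(?P<actual>.*), "
    r"Expected: (?P<expected>.*), input: (?P<input>.*)"
)

def extract_mistakes(lines, category):
    """Return (actual, expected, input) rows for one generalization category."""
    rows = []
    for line in lines:
        m = MISTAKE_RE.match(line.strip())
        if m and m.group("cat") == category:
            # the shell version strips single quotes with sed "s/'//g"
            rows.append(tuple(m.group(g).replace("'", "").strip()
                              for g in ("actual", "expected", "input")))
    return rows
```

Writing the returned rows out with a tab separator reproduces the TSV shown above.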
In [ ]:
!mv wu_et_al_2023_recogs_baseline_for_error_analysis.log wu_et_al_2023_recogs_baseline_for_error_analysis_seed_89.log
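The error rows extracted above fit the flat-pattern-matching hypothesis: the model tends to assign the verb's agent role to the nearest noun on its left, which, once the subject is modified by a prepositional phrase, is the prepositional noun. A minimal illustrative check (the helper name and regex are mine, assuming only the `agent ( v , n )` notation visible in the table):

```python
import re

# Hypothetical helper (not from the notebook): which noun index does a
# ReCOGS_pos logical form assign as agent of a given verb index?
def agent_of(lf, verb_idx):
    m = re.search(rf"agent \( {verb_idx} , (\d+) \)", lf)
    return int(m.group(1)) if m else None

# From the table above, "The sailor in a house lended a biscuit ... to a goose .":
# the subject "sailor" is noun 1, the prepositional noun "house" is noun 4,
# and "lend" is verb 5.
predicted = "... AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) ..."
expected  = "... AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) ..."
assert agent_of(predicted, 5) == 4  # model picked the prepositional noun
assert agent_of(expected, 5) == 1   # gold picks the true subject
```

Applying a check like this over all extracted rows gives a count of how many errors are of the predicted "nearest noun left of the verb" type.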
In [ ]:
# train and evaluate the baseline Wu et al 2023 model on the baseline ReCOGS_pos data (seed 100)
!python run_cogs.py --model_name ende_transformer --use_iiem --gpu 1 --train_batch_size 128 --eval_batch_size 128 --lr 0.0001 --data_path ./recogs_positional_index --output_dir ./results_recogs_positional_index_control --lfs cogs --do_train --do_test --do_gen --max_seq_len 512 --output_json --epochs 300 --seeds "100"
EncoderDecoderModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From 👉v4.50👈 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
Epoch: 0:   0% 0/213 [00:00<?, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 0: 100% 213/213 [00:11<00:00, 18.74it/s, loss=6.01]
Epoch: 1: 100% 213/213 [00:10<00:00, 20.32it/s, loss=4.57]
Epoch: 2: 100% 213/213 [00:10<00:00, 20.20it/s, loss=3.56]
Epoch: 3: 100% 213/213 [00:10<00:00, 20.32it/s, loss=2.5]
Epoch: 4: 100% 213/213 [00:10<00:00, 20.32it/s, loss=1.92]
Epoch: 5: 100% 213/213 [00:10<00:00, 20.32it/s, loss=1.58]
Epoch: 6: 100% 213/213 [00:10<00:00, 20.39it/s, loss=1.32]
Epoch: 7: 100% 213/213 [00:10<00:00, 20.39it/s, loss=1.15]
Epoch: 8: 100% 213/213 [00:10<00:00, 20.44it/s, loss=1.02]
Epoch: 9: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.92]
Epoch: 10: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.82]
Epoch: 11: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0.72]
Epoch: 12: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.64]
Epoch: 13: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.56]
Epoch: 14: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.49]
Epoch: 15: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.43]
Epoch: 16: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.37]
Epoch: 17: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0.31]
Epoch: 18: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.26]
Epoch: 19: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0.22]
Epoch: 20: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0.17]
Epoch: 21: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.15]
Epoch: 22: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.12]
Epoch: 23: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.1]
Epoch: 24: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.09]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 25: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0.07]
Epoch: 26:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 26: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.06]
Epoch: 27:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 27: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0.06]
Epoch: 28:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 28: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.05]
Epoch: 29:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 29: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.05]
Epoch: 30:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 30: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.04]
Epoch: 31:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 31: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0.06]
Epoch: 32:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 32: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.04]
Epoch: 33:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 33: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.03]
Epoch: 34:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 34: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.04]
Epoch: 35:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 35: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.02]
Epoch: 36:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 36: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.03]
Epoch: 37:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 37: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0.02]
Epoch: 38:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 38: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 39:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 39: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.02]
Epoch: 40:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 40: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0.02]
Epoch: 41:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 41: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0.01]
Epoch: 42:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 42: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.01]
Epoch: 43:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 43: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0.02]
Epoch: 44:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 44: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0.01]
Epoch: 45:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 45: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 46:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 46: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.01]
Epoch: 47:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 47: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.01]
Epoch: 48:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 48: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0.01]
Epoch: 49:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 49: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 50:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 50: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.01]
Epoch: 51:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 51: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.01]
Epoch: 52:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 52: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.01]
Epoch: 53:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 53: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 54:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 54: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 55:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 55: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.01]
Epoch: 56:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 56: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 57:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 57: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.01]
Epoch: 58:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 58: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0.01]
Epoch: 59:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 59: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 60:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 60: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0.01]
Epoch: 61:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 61: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 62:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 62: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 63:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 63: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 64:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 64: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0.01]
Epoch: 65:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 65: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 66:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 66: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0.01]
Epoch: 67:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 67: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0.01]
Epoch: 68:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 68: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 69:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 69: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 70:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 70: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 71:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 71: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 72:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 72: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 73:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 73: 100% 213/213 [00:10<00:00, 20.46it/s, loss=0]
[repeated per-epoch output elided: epochs 74-135 each completed 213/213 batches in ~10 s (~20.4 it/s) at loss=0, except epoch 87 at loss=0.01; the same FutureWarning above printed at the start of every epoch]
Epoch: 136:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 136: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 137:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 137: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 138:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 138: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 139:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 139: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 140:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 140: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 141:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 141: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 142:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 142: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 143:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 143: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 144:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 144: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 145:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 145: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 146:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 146: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 147:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 147: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 148:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 148: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 149:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 149: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 150:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 150: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 151:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 151: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 152:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 152: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 153:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 153: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 154:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 154: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 155:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 155: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 156:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 156: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 157:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 157: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 158:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 158: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 159:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 159: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 160:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 160: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 161:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 161: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 162:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 162: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 163:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 163: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 164:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 164: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 165:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 165: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 166:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 166: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 167:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 167: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 168:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 168: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 169:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 169: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 170:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 170: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 171:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 171: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 172:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 172: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 173:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 173: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 174:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 174: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 175:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 175: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 176:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 176: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 177:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 177: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 178:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 178: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 179:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 179: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 180:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 180: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 181:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 181: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 182:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 182: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 183:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 183: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 184:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 184: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
[Epochs 185–245: output identical to the above repeated each epoch — the same FutureWarning from /content/ReCOGS/model/encoder_decoder_hf.py:828, followed by "Epoch: N: 100% 213/213 [00:10<00:00, ~20.4it/s, loss=0]"]
Epoch: 246:   0% 0/213 [00:00<?, ?it/s]
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 246: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 247:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 247: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 248:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 248: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 249:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 249: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 250:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 250: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 251:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 251: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 252:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 252: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 253:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 253: 100% 213/213 [00:10<00:00, 20.39it/s, loss=0]
Epoch: 254:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 254: 100% 213/213 [00:10<00:00, 20.45it/s, loss=0]
Epoch: 255:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 255: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 256:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 256: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 257:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 257: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 258:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 258: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 259:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 259: 100% 213/213 [00:10<00:00, 20.38it/s, loss=0]
Epoch: 260:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 260: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 261:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 261: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 262:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 262: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 263:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 263: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 264:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 264: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 265:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 265: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 266:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 266: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 267:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 267: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 268:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 268: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 269:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 269: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 270:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 270: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 271:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 271: 100% 213/213 [00:10<00:00, 20.34it/s, loss=0]
Epoch: 272:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 272: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 273:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 273: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 274:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 274: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 275:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 275: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 276:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 276: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 277:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 277: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 278:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 278: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 279:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 279: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 280:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 280: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 281:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 281: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 282:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 282: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 283:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 283: 100% 213/213 [00:10<00:00, 20.36it/s, loss=0]
Epoch: 284:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 284: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 285:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 285: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 286:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 286: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 287:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 287: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 288:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 288: 100% 213/213 [00:10<00:00, 20.37it/s, loss=0]
Epoch: 289:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 289: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 290:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 290: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 291:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 291: 100% 213/213 [00:10<00:00, 20.43it/s, loss=0]
Epoch: 292:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 292: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 293:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 293: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 294:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 294: 100% 213/213 [00:10<00:00, 20.35it/s, loss=0]
Epoch: 295:   0% 0/213 [00:00<?, ?it/s]/content/ReCOGS/model/encoder_decoder_hf.py:828: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
  warnings.warn(DEPRECATION_WARNING, FutureWarning)
Epoch: 295: 100% 213/213 [00:10<00:00, 20.44it/s, loss=0]
Epoch: 296: 100% 213/213 [00:10<00:00, 20.42it/s, loss=0]
Epoch: 297: 100% 213/213 [00:10<00:00, 20.40it/s, loss=0]
Epoch: 298: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 299: 100% 213/213 [00:10<00:00, 20.41it/s, loss=0]
Epoch: 100% 300/300 [52:29<00:00, 10.50s/it]
Iteration: 100% 24/24 [00:10<00:00,  2.33it/s, acc=1]
Iteration: 100% 165/165 [18:59<00:00,  6.91s/it, acc=0.884]
obj_pp_to_subj_pp: 16.3
cp_recursion: 53.2
pp_recursion: 30.5
subj_to_obj_proper: 97.4
prim_to_obj_proper: 81.2
prim_to_subj_proper: 99.9
LEX: 98.58666666666667
OVERALL: 88.44285714285715
In [ ]:
# extract and move the error logs
!echo "actual	expected	input" > wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!cat wu_et_al_2023_recogs_baseline_for_error_analysis.log | grep obj_pp_to_subj_pp | sed -E 's/INFO:root:Mistake \(category obj_pp_to_subj_pp\)://g' | sed -E 's/, Expected: /	/g' | sed -E 's/, input: /	/g' | sed -E "s/'//g" >> wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!head -n 10 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv
!mv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp.tsv wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_100.tsv
actual	expected	input
 * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND scream ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )	The baby on a tray in the house screamed .
 * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	* spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	The spokesman in the house served Emma the rose .
 donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 )	donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )	A donkey in the room sold Ella a donut .
 cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 4 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	cat ( 1 ) ; * house ( 4 ) ; * cake ( 8 ) ; * boy ( 11 ) ; table ( 14 ) ; nmod . in ( 1 , 4 ) AND offer ( 6 ) AND recipient ( 6 , 1 ) AND theme ( 6 , 8 ) AND agent ( 6 , 11 ) AND nmod . beside ( 11 , 14 )	A cat in the house was offered the cake by the boy beside a table .
 * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND theme ( 8 , 1 ) AND agent ( 8 , 7 )	* dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )	The dog in a bakery in the bag sneezed .
 girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )	girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )	A girl on the stool on the table drew a frog .
 donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 4 )	donut ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND grow ( 5 ) AND theme ( 5 , 1 )	A donut on a table grew .
 * cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 ) AND agent ( 6 , 4 )	* cake ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND paint ( 6 ) AND theme ( 6 , 1 )	The cake in the house was painted .
 * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND theme ( 5 , 1 ) AND agent ( 5 , 13 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	* sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )	The sailor in a house lended a biscuit on a table to a goose .
In [ ]:
!mv wu_et_al_2023_recogs_baseline_for_error_analysis.log wu_et_al_2023_recogs_baseline_for_error_analysis_seed_100.log
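For reference, the shell pipeline above can be sketched in plain Python as follows (a sketch assuming the same `INFO:root:Mistake (category obj_pp_to_subj_pp):` log-line format targeted by the sed patterns; the logical-form values here are placeholders, not real model output):

```python
import re

# Placeholder log line in the format the grep/sed pipeline above expects.
line = ("INFO:root:Mistake (category obj_pp_to_subj_pp):'ACTUAL_LF', "
        "Expected: 'EXPECTED_LF', input: 'The baby screamed .'")

# Strip the log prefix, turn the field separators into tabs, drop the quotes.
row = re.sub(r"INFO:root:Mistake \(category obj_pp_to_subj_pp\):", "", line)
row = row.replace(", Expected: ", "\t").replace(", input: ", "\t").replace("'", "")
print(row)  # ACTUAL_LF	EXPECTED_LF	The baby screamed .
```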

Raw data was saved at REDACTED FOR BLINDED REVIEW .

Switched from an A100 to a CPU instance now that training and evaluation of the Wu et al 2023 baseline Encoder-Decoder Transformers is finished

In [ ]:
!unzip /content/wu_et_al_2023_baseline_error_logs_contains_10_of_10_seeds__seeds_from_their_paper_and_seeds_plus_one_NOT_RASP_MODEL.zip
Archive:  /content/wu_et_al_2023_baseline_error_logs_contains_10_of_10_seeds__seeds_from_their_paper_and_seeds_plus_one_NOT_RASP_MODEL.zip
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_88.tsv  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_99.tsv  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis.log  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_seed_88.log  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_seed_99.log  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_42.tsv  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_66.tsv  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_77.tsv  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_seed_42.log  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_seed_66.log  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_seed_77.log  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_43.tsv  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_seed_43.log  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_67.tsv  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_seed_67.log  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_100.tsv  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_78.tsv  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_89.tsv  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_seed_100.log  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_seed_78.log  
  inflating: wu_et_al_2023_recogs_baseline_for_error_analysis_seed_89.log  
In [ ]:
!ls -lart *.tsv
-rw-r--r-- 1 root root 276428 Dec 17 21:21 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_42.tsv
-rw-r--r-- 1 root root 257854 Dec 17 22:36 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_66.tsv
-rw-r--r-- 1 root root 225807 Dec 17 23:49 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_77.tsv
-rw-r--r-- 1 root root 279560 Dec 18 03:31 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_88.tsv
-rw-r--r-- 1 root root 274832 Dec 18 04:36 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_99.tsv
-rw-r--r-- 1 root root 257890 Dec 18 05:47 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_43.tsv
-rw-r--r-- 1 root root 269110 Dec 18 07:13 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_67.tsv
-rw-r--r-- 1 root root 262851 Dec 18 08:24 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_78.tsv
-rw-r--r-- 1 root root 265509 Dec 18 11:52 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_89.tsv
-rw-r--r-- 1 root root 272960 Dec 18 13:07 wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_100.tsv

Analyzing the collected obj_pp_to_subj_pp errors across multiple baseline Transformer training runs with random initial weights and data shuffling¶

Per the paper, we expect that when the agent is to the left of the verb, the agent is modified by a prepositional phrase, and there is a single error in the logical form output, the error will be in the agent relationship: specifically, the agent index will point to the prepositional noun instead of the true agent.

Now we will run that analysis per seed and then report aggregate statistics across the 10 random training runs, to see both the per-run variation and the overall numbers.
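To make the hypothesis concrete before running the analysis, here is a minimal sketch (a toy heuristic with an illustrative vocabulary, not the baseline Transformer) of the flat "closest noun left of the verb" behavior the hypothesis attributes to the model:

```python
# Toy flat heuristic (illustrative only, not the baseline model): take the
# closest noun to the left of the verb as the agent.
def closest_noun_left_of_verb(tokens, verb, nouns):
    verb_idx = tokens.index(verb)
    candidates = [i for i, t in enumerate(tokens) if t in nouns and i < verb_idx]
    return tokens[max(candidates)] if candidates else None

nouns = {"baby", "tray"}
# Without the prepositional phrase the heuristic finds the true agent:
print(closest_noun_left_of_verb("the baby screamed".split(), "screamed", nouns))  # baby
# With "on a tray" inserted, it picks the prepositional noun instead,
# matching the agent-index errors shown in the TSV above:
print(closest_noun_left_of_verb("the baby on a tray screamed".split(), "screamed", nouns))  # tray
```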

Analysis scripts (code)¶

In [ ]:
import pandas as pd
seeds = ["42", "66", "77", "88", "99", "43", "67", "78", "89", "100"]
dfs = list()
for seed in seeds:
  actual_expected_input_rows_for_obj_pp_to_subj_pp_errors_wu_et_al = pd.read_csv(f"wu_et_al_2023_recogs_baseline_for_error_analysis_obj_pp_to_subj_pp_seed_{seed}.tsv", delimiter="\t")
  dfs.append(actual_expected_input_rows_for_obj_pp_to_subj_pp_errors_wu_et_al)
In [ ]:
data_per_seed = []
for actual_expected_input_rows_for_obj_pp_to_subj_pp_errors_wu_et_al in dfs:
  # one dict per error row, with "input", "actual", and "expected" keys
  data = actual_expected_input_rows_for_obj_pp_to_subj_pp_errors_wu_et_al[["input", "actual", "expected"]].to_dict("records")
  data_per_seed.append(data)
In [ ]:
import subprocess
subprocess.run("pip install lark --upgrade", shell=True)

from lark import Lark, tree

# COGS grammar in Lark format per the IBM CPG project (we do not use any CPG code, just their description of the COGS grammar from their data-preparation utilities)
# https://github.com/IBM/cpg/blob/c3626b4e03bfc681be2c2a5b23da0b48abe6f570/src/model/cogs_data.py#L523
grammar = '''
start: s1 | s2 | s3 | s4 | vp_internal
    s1: np vp_external
    s2: np vp_passive
    s3: np vp_passive_dat
    s4: np vp_external4
    vp_external: v_unerg | v_trans_omissible_p1 | vp_external1 | vp_external2 | vp_external3 | vp_external5 | vp_external6 | vp_external7
    vp_external1: v_unacc_p1 np
    vp_external2: v_trans_omissible_p2 np
    vp_external3: v_trans_not_omissible np
    vp_external4: v_inf_taking to v_inf
    vp_external5: v_cp_taking that start
    vp_external6: v_dat_p1 np pp_iobj
    vp_external7: v_dat_p2 np np
    vp_internal: np v_unacc_p2
    vp_passive: vp_passive1 | vp_passive2 | vp_passive3 | vp_passive4 | vp_passive5 | vp_passive6 | vp_passive7 | vp_passive8
    vp_passive1: was v_trans_not_omissible_pp_p1
    vp_passive2: was v_trans_not_omissible_pp_p2 by np
    vp_passive3: was v_trans_omissible_pp_p1
    vp_passive4: was v_trans_omissible_pp_p2 by np
    vp_passive5: was v_unacc_pp_p1
    vp_passive6: was v_unacc_pp_p2 by np
    vp_passive7: was v_dat_pp_p1 pp_iobj
    vp_passive8: was v_dat_pp_p2 pp_iobj by np
    vp_passive_dat: vp_passive_dat1 | vp_passive_dat2
    vp_passive_dat1: was v_dat_pp_p3 np
    vp_passive_dat2: was v_dat_pp_p4 np by np
    np: np_prop | np_det | np_pp
    np_prop: proper_noun
    np_det: det common_noun
    np_pp: np_det pp np
    pp_iobj: to np
    det: "the" | "a"
    pp: "on" | "in" | "beside"
    was: "was"
    by: "by"
    to: "to"
    that: "that"
    common_noun: "girl" | "boy" | "cat" | "dog" | "baby" | "child" | "teacher" | "frog" | "chicken" | "mouse" | "lion" | "monkey" | "bear" | "giraffe" | "horse" | "bird" | "duck" | "bunny" | "butterfly" | "penguin" | "student" | "professor" | "monster" | "hero" | "sailor" | "lawyer" | "customer" | "scientist" | "princess" | "president" | "cow" | "crocodile" | "goose" | "hen" | "deer" | "donkey" | "bee" | "fly" | "kitty" | "tiger" | "wolf" | "zebra" | "mother" | "father" | "patient" | "manager" | "director" | "king" | "queen" | "kid" | "fish" | "moose" | "pig" | "pony" | "puppy" | "sheep" | "squirrel" | "lamb" | "turkey" | "turtle" | "doctor" | "pupil" | "prince" | "driver" | "consumer" | "writer" | "farmer" | "friend" | "judge" | "visitor" | "guest" | "servant" | "chief" | "citizen" | "champion" | "prisoner" | "captain" | "soldier" | "passenger" | "tenant" | "politician" | "resident" | "buyer" | "spokesman" | "governor" | "guard" | "creature" | "coach" | "producer" | "researcher" | "guy" | "dealer" | "duke" | "tourist" | "landlord" | "human" | "host" | "priest" | "journalist" | "poet" | "hedgehog" | "shark" | "cockroach" | "cobra" | "hippo" | "cake" | "donut" | "cookie" | "box" | "rose" | "drink" | "raisin" | "melon" | "sandwich" | "strawberry" | "ball" | "balloon" | "bat" | "block" | "book" | "crayon" | "chalk" | "doll" | "game" | "glue" | "lollipop" | "hamburger" | "banana" | "biscuit" | "muffin" | "pancake" | "pizza" | "potato" | "pretzel" | "pumpkin" | "sweetcorn" | "yogurt" | "pickle" | "jigsaw" | "pen" | "pencil" | "present" | "toy" | "cracker" | "brush" | "radio" | "cloud" | "mandarin" | "hat" | "basket" | "plant" | "flower" | "chair" | "spoon" | "pillow" | "gumball" | "scarf" | "shoe" | "jacket" | "hammer" | "bucket" | "knife" | "cup" | "plate" | "towel" | "bottle" | "bowl" | "can" | "clock" | "jar" | "penny" | "purse" | "soap" | "toothbrush" | "watch" | "newspaper" | "fig" | "bag" | "wine" | "key" | "weapon" | "brain" | "tool" | "crown" | "ring" | "leaf" | 
"fruit" | "mirror" | "beer" | "shirt" | "guitar" | "chemical" | "seed" | "shell" | "brick" | "bell" | "coin" | "button" | "needle" | "molecule" | "crystal" | "flag" | "nail" | "bean" | "liver" | "table" | "stage" | "bed" | "chair" | "stool" | "road" | "tree" | "box" | "surface" | "seat" | "speaker" | "computer" | "rock" | "boat" | "cabinet" | "tv" | "plate" | "desk" | "bowl" | "bench" | "shelf" | "cloth" | "piano" | "bible" | "leaflet" | "sheet" | "cupboard" | "truck" | "tray" | "notebook" | "blanket" | "deck" | "coffin" | "log" | "ladder" | "barrel" | "rug" | "canvas" | "tiger" | "towel" | "throne" | "booklet" | "sock" | "corpse" | "sofa" | "keyboard" | "book" | "pillow" | "pad" | "train" | "couch" | "bike" | "pedestal" | "platter" | "paper" | "rack" | "board" | "panel" | "tripod" | "branch" | "machine" | "floor" | "napkin" | "cookie" | "block" | "cot" | "device" | "yacht" | "dog" | "mattress" | "ball" | "stand" | "stack" | "windowsill" | "counter" | "cushion" | "hanger" | "trampoline" | "gravel" | "cake" | "carpet" | "plaque" | "boulder" | "leaf" | "mound" | "bun" | "dish" | "cat" | "podium" | "tabletop" | "beach" | "bag" | "glacier" | "brick" | "crack" | "vessel" | "futon" | "turntable" | "rag" | "chessboard" | "house" | "room" | "car" | "garden" | "box" | "cup" | "glass" | "bag" | "vehicle" | "hole" | "cabinet" | "bottle" | "shoe" | "storage" | "cot" | "vessel" | "pot" | "pit" | "tin" | "can" | "cupboard" | "envelope" | "nest" | "bush" | "coffin" | "drawer" | "container" | "basin" | "tent" | "soup" | "well" | "barrel" | "bucket" | "cage" | "sink" | "cylinder" | "parcel" | "cart" | "sack" | "trunk" | "wardrobe" | "basket" | "bin" | "fridge" | "mug" | "jar" | "corner" | "pool" | "blender" | "closet" | "pile" | "van" | "trailer" | "saucepan" | "truck" | "taxi" | "haystack" | "dumpster" | "puddle" | "bathtub" | "pod" | "tub" | "trap" | "bun" | "microwave" | "bookstore" | "package" | "cafe" | "train" | "castle" | "bunker" | "vase" | "backpack" | "tube" | "hammock" | 
"stadium" | "backyard" | "swamp" | "monastery" | "refrigerator" | "palace" | "cubicle" | "crib" | "condo" | "tower" | "crate" | "dungeon" | "teapot" | "tomb" | "casket" | "jeep" | "shoebox" | "wagon" | "bakery" | "fishbowl" | "kennel" | "china" | "spaceship" | "penthouse" | "pyramid" | "table" | "stage" | "bed" | "chair" | "book" | "road" | "tree" | "machine" | "house" | "seat" | "speaker" | "computer" | "rock" | "car" | "box" | "cup" | "glass" | "bag" | "flower" | "boat" | "vehicle" | "key" | "painting" | "cabinet" | "tv" | "bottle" | "cat" | "desk" | "shoe" | "mirror" | "clock" | "bench" | "bike" | "lamp" | "lion" | "piano" | "crystal" | "toy" | "duck" | "sword" | "sculpture" | "rod" | "truck" | "basket" | "bear" | "nest" | "sphere" | "bush" | "surgeon" | "poster" | "throne" | "giant" | "trophy" | "hedge" | "log" | "tent" | "ladder" | "helicopter" | "barrel" | "yacht" | "statue" | "bucket" | "skull" | "beast" | "lemon" | "whale" | "cage" | "gardner" | "fox" | "sink" | "trainee" | "dragon" | "cylinder" | "monk" | "bat" | "headmaster" | "philosopher" | "foreigner" | "worm" | "chemist" | "corpse" | "wolf" | "torch" | "sailor" | "valve" | "hammer" | "doll" | "genius" | "baron" | "murderer" | "bicycle" | "keyboard" | "stool" | "pepper" | "warrior" | "pillar" | "monkey" | "cassette" | "broker" | "bin"
    proper_noun: "emma" | "liam" | "olivia" | "noah" | "ava" | "william" | "isabella" | "james" | "sophia" | "oliver" | "charlotte" | "benjamin" | "mia" | "elijah" | "amelia" | "lucas" | "harper" | "mason" | "evelyn" | "logan" | "abigail" | "alexander" | "emily" | "ethan" | "elizabeth" | "jacob" | "mila" | "michael" | "ella" | "daniel" | "avery" | "henry" | "sofia" | "jackson" | "camila" | "sebastian" | "aria" | "aiden" | "scarlett" | "matthew" | "victoria" | "samuel" | "madison" | "david" | "luna" | "joseph" | "grace" | "carter" | "chloe" | "owen" | "penelope" | "wyatt" | "layla" | "john" | "riley" | "jack" | "zoey" | "luke" | "nora" | "jayden" | "lily" | "dylan" | "eleanor" | "grayson" | "hannah" | "levi" | "lillian" | "isaac" | "addison" | "gabriel" | "aubrey" | "julian" | "ellie" | "mateo" | "stella" | "anthony" | "natalie" | "jaxon" | "zoe" | "lincoln" | "leah" | "joshua" | "hazel" | "christopher" | "violet" | "andrew" | "aurora" | "theodore" | "savannah" | "caleb" | "audrey" | "ryan" | "brooklyn" | "asher" | "bella" | "nathan" | "claire" | "thomas" | "skylar" | "leo" | "lina" | "paula" | "charlie"
    v_trans_omissible_p1: "ate" | "painted" | "drew" | "cleaned" | "cooked" | "dusted" | "hunted" | "nursed" | "sketched" | "juggled" | "called" | "heard" | "packed" | "saw" | "noticed" | "studied" | "examined" | "observed" | "knew" | "investigated" | "baked"
    v_trans_omissible_p2: "ate" | "painted" | "drew" | "cleaned" | "cooked" | "dusted" | "hunted" | "nursed" | "sketched" | "juggled" | "called" | "heard" | "packed" | "saw" | "noticed" | "studied" | "examined" | "observed" | "knew" | "investigated" | "baked"
    v_trans_omissible_pp_p1: "eaten" | "painted" | "drawn" | "cleaned" | "cooked" | "dusted" | "hunted" | "nursed" | "sketched" | "juggled" | "called" | "heard" | "packed" | "seen" | "noticed" | "studied" | "examined" | "observed" | "known" | "investigated"
    v_trans_omissible_pp_p2: "eaten" | "painted" | "drawn" | "cleaned" | "cooked" | "dusted" | "hunted" | "nursed" | "sketched" | "juggled" | "called" | "heard" | "packed" | "seen" | "noticed" | "studied" | "examined" | "observed" | "known" | "investigated"
    v_trans_not_omissible: "liked" | "helped" | "found" | "loved" | "poked" | "admired" | "adored" | "appreciated" | "missed" | "respected" | "threw" | "tolerated" | "valued" | "worshipped" | "discovered" | "held" | "stabbed" | "touched" | "pierced" | "tossed"
    v_trans_not_omissible_pp_p1: "liked" | "helped" | "found" | "loved" | "poked" | "admired" | "adored" | "appreciated" | "missed" | "respected" | "thrown" | "tolerated" | "valued" | "worshipped" | "discovered" | "held" | "stabbed" | "touched" | "pierced" | "tossed"
    v_trans_not_omissible_pp_p2: "liked" | "helped" | "found" | "loved" | "poked" | "admired" | "adored" | "appreciated" | "missed" | "respected" | "thrown" | "tolerated" | "valued" | "worshipped" | "discovered" | "held" | "stabbed" | "touched" | "pierced" | "tossed"
    v_cp_taking: "liked" | "hoped" | "said" | "noticed" | "believed" | "confessed" | "declared" | "proved" | "thought" | "admired" | "appreciated" | "respected" | "supported" | "tolerated" | "valued" | "wished" | "dreamed" | "expected" | "imagined" | "meant"
    v_inf_taking: "wanted" | "preferred" | "needed" | "intended" | "tried" | "attempted" | "planned" | "expected" | "hoped" | "wished" | "craved" | "liked" | "hated" | "loved" | "enjoyed" | "dreamed" | "meant" | "longed" | "yearned" | "itched"
    v_unacc_p1: "rolled" | "froze" | "burned" | "shortened" | "floated" | "grew" | "slid" | "broke" | "crumpled" | "split" | "changed" | "snapped" | "disintegrated" | "collapsed" | "decomposed" | "doubled" | "improved" | "inflated" | "enlarged" | "reddened" | "shattered" | "blessed" | "squeezed"
    v_unacc_p2: "rolled" | "froze" | "burned" | "shortened" | "floated" | "grew" | "slid" | "broke" | "crumpled" | "split" | "changed" | "snapped" | "disintegrated" | "collapsed" | "decomposed" | "doubled" | "improved" | "inflated" | "enlarged" | "reddened" | "shattered" | "blessed" | "squeezed"
    v_unacc_pp_p1: "rolled" | "frozen" | "burned" | "shortened" | "floated" | "grown" | "slid" | "broken" | "crumpled" | "split" | "changed" | "snapped" | "disintegrated" | "collapsed" | "decomposed" | "doubled" | "improved" | "inflated" | "enlarged" | "reddened" | "shattered" | "blessed" | "squeezed"
    v_unacc_pp_p2: "rolled" | "frozen" | "burned" | "shortened" | "floated" | "grown" | "slid" | "broken" | "crumpled" | "split" | "changed" | "snapped" | "disintegrated" | "collapsed" | "decomposed" | "doubled" | "improved" | "inflated" | "enlarged" | "reddened" | "shattered" | "blessed" | "squeezed"
    v_unerg: "slept" | "smiled" | "laughed" | "sneezed" | "cried" | "talked" | "danced" | "jogged" | "walked" | "ran" | "napped" | "snoozed" | "screamed" | "stuttered" | "frowned" | "giggled" | "scoffed" | "snored" | "smirked" | "gasped"
    v_inf: "walk" | "run" | "sleep" | "sneeze" | "nap" | "eat" | "read" | "cook" | "hunt" | "paint" | "talk" | "dance" | "giggle" | "jog" | "smirk" | "call" | "sketch" | "dust" | "clean" | "investigate" | "crawl"
    v_dat_p1: "gave" | "lended" | "sold" | "offered" | "fed" | "passed" | "sent" | "rented" | "served" | "awarded" | "brought" | "handed" | "forwarded" | "promised" | "mailed" | "loaned" | "posted" | "returned" | "slipped" | "wired" | "teleported" | "shipped"
    v_dat_p2: "gave" | "lended" | "sold" | "offered" | "fed" | "passed" | "sent" | "rented" | "served" | "awarded" | "brought" | "handed" | "forwarded" | "promised" | "mailed" | "loaned" | "posted" | "returned" | "slipped" | "wired" | "teleported" | "shipped"
    v_dat_pp_p1: "given" | "lended" | "sold" | "offered" | "fed" | "passed" | "sent" | "rented" | "served" | "awarded" | "brought" | "handed" | "forwarded" | "promised" | "mailed" | "loaned" | "posted" | "returned" | "slipped" | "wired"
    v_dat_pp_p2: "given" | "lended" | "sold" | "offered" | "fed" | "passed" | "sent" | "rented" | "served" | "awarded" | "brought" | "handed" | "forwarded" | "promised" | "mailed" | "loaned" | "posted" | "returned" | "slipped" | "wired"
    v_dat_pp_p3: "given" | "lended" | "sold" | "offered" | "fed" | "passed" | "sent" | "rented" | "served" | "awarded" | "brought" | "handed" | "forwarded" | "promised" | "mailed" | "loaned" | "posted" | "returned" | "slipped" | "wired"
    v_dat_pp_p4: "given" | "lended" | "sold" | "offered" | "fed" | "passed" | "sent" | "rented" | "served" | "awarded" | "brought" | "handed" | "forwarded" | "promised" | "mailed" | "loaned" | "posted" | "returned" | "slipped" | "wired"
    %import common.WS
    %ignore WS
    '''

parser = Lark(grammar, start='start')

# 1st NP agent verbs (non CP)
# "v_trans_omissible_p1": "agent",
# "v_trans_omissible_p2": "agent",
# "v_trans_not_omissible": "agent",
# "v_cp_taking": "agent",
# "v_inf_taking": "agent",
# "v_unacc_p1": "agent",
# "v_unerg": "agent",
# "v_inf": "agent",
# "v_dat_p1": "agent",
# "v_dat_p2": "agent",
agent_left_of_verb_verb_type_set = set(["v_trans_omissible_p1", "v_trans_omissible_p2", "v_trans_not_omissible", "v_cp_taking", "v_inf_taking", "v_unacc_p1", "v_unerg", "v_inf", "v_dat_p1", "v_dat_p2"])

theme_left_of_verb_verb_type_set = set(
    ["v_trans_omissible_pp_p1",
     "v_unacc_p2",  # from vp_internal (np v_unacc_p2): the theme NP precedes the verb
     "v_unacc_pp_p1",
     "v_unacc_pp_p2",
     "v_trans_omissible_pp_p2",
     "v_trans_not_omissible_pp_p1",
     "v_trans_not_omissible_pp_p2",
     "v_dat_pp_p1",
     "v_dat_pp_p2"
     ])

theme_right_of_verb_verb_type_set = set([
    "v_unacc_p1",
    "v_trans_omissible_p2",
    "v_trans_not_omissible",
])
theme_middle_of_dative_verb_type_set = set(["v_dat_pp_p4", "v_dat_p1"])

# for enforcing, during the check of our hypothesis, a stricter expectation:
# the misassigned agent must be the closest prepositional noun to the left of the verb (not just any prepositional noun)
def get_verbs_with_pps_before_and_last_noun_before_first_verb_index(lark_tree_root):
  nodes = [lark_tree_root]
  verbs = []
  terminals_before_count = 0
  pps_before_counts = []
  pps_before_count = 0
  last_noun_before_first_verb_index = None
  while len(nodes) > 0:
    node = nodes[-1]
    nodes = nodes[:-1]
    node_type = node.data[:]
    if node_type[:2] == 'v_':
      pps_before_counts.append(pps_before_count)
      verbs.append(node_type)
    children = []
    for child in node.children:
      # it is a tree, no need to check for revisits
      children.append(child)
    # visit children left-to-right, so we collect not just the verbs but also the PP counts before each verb and the last noun before the first verb
    children.reverse() # the stack is LIFO, so pushing children in reverse yields left-to-right order; verbs are then returned in the order they appear in the sentence
    for node in children:
      nodes.append(node)
    if node_type[:] in ["common_noun", "proper_noun"] and len(verbs) == 0:
      last_noun_before_first_verb_index = terminals_before_count # no need to subtract 1 here as before incrementing below
    # only increment on terminals
    if len(children) == 0:
      terminals_before_count += 1
    if node_type[:] == "pp":
      pps_before_count += 1
  return verbs, pps_before_counts, last_noun_before_first_verb_index

def get_theme_side(lark_tree_root):
    verb_type = get_verbs(lark_tree_root)[0]
    if verb_type in theme_right_of_verb_verb_type_set:
        return "right"
    elif verb_type in theme_left_of_verb_verb_type_set:
        return "left"
    elif verb_type in theme_middle_of_dative_verb_type_set:
        return "middle"
    return None

def get_agent_side(lark_tree_root):
    verb_type = get_verbs(lark_tree_root)[0]
    if verb_type is not None and verb_type not in agent_left_of_verb_verb_type_set:
        return "right or middle"
    elif verb_type in agent_left_of_verb_verb_type_set:
        return "left"
    return None
In [ ]:
# log the Lark version in case there is a regression later or incompatibility
import lark
lark.__version__
Out[ ]:
'1.2.2'
In [ ]:
from collections import defaultdict
import numpy as np
In [ ]:
def is_relation_or_nmod(lf_part):
  return "recipient" in lf_part or "theme" in lf_part or "agent" in lf_part or "nmod" in lf_part

def get_right_idx_from_relation_or_nmod_str(relation_or_nmod_str):
  # redundant safety check just in case
  if not is_relation_or_nmod(relation_or_nmod_str):
    return None
  if "," in relation_or_nmod_str and ")" in relation_or_nmod_str:
    idx_str = relation_or_nmod_str.split(',')[-1].split(')')[0].strip()
    if len(idx_str) == 0:
      print(f"relation_or_nmod_str: '{relation_or_nmod_str}' could not be parsed")
      return None
    return f"{int(idx_str)}"
  else:
    return None
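A worked example of the index extraction above (the relation string is illustrative): the right index of a relation such as `agent ( 8 , 7 )` is the last comma-separated field before the closing parenthesis.

```python
# Same string manipulation as get_right_idx_from_relation_or_nmod_str above,
# on an illustrative relation string.
s = "agent ( 8 , 7 )"
right_idx = s.split(',')[-1].split(')')[0].strip()
print(right_idx)  # 7
```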
In [ ]:
agent_left_of_verb_errors_total = 0
total_error_count = 0
for data in data_per_seed:
  for wu_et_al_2023_baseline_error_example in data:
    actual = wu_et_al_2023_baseline_error_example["actual"]
    expected = wu_et_al_2023_baseline_error_example["expected"]
    input_text = wu_et_al_2023_baseline_error_example["input"]
    tree = parser.parse(input_text.replace(" .", "").lower().strip())
    agent_side = get_agent_side(tree)
    total_error_count += 1
    if agent_side == "left":
      agent_left_of_verb_errors_total += 1
In [ ]:
total_error_count
Out[ ]:
8077
In [ ]:
agent_left_of_verb_errors_total
Out[ ]:
4907

We are going to look at the subset of these errors that is simpler to analyze: single point errors in single-verb sentences without complement phrases.
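What "single point error" means here, sketched on a made-up pair of logical forms (same number of parts, exactly one part differing): split both logical forms on `;`/`AND`, sort, and compare part-by-part, as the analysis function below does.

```python
# Split a COGS-style logical form into sorted parts (entities and relations).
def lf_parts(lf):
    return sorted(p.strip() for p in lf.replace(" AND ", " ; ").split(";"))

# Made-up expected/actual pair differing only in the agent right-index.
expected = "* baby ( 1 ) ; tray ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 8 ) AND agent ( 8 , 1 )"
actual   = "* baby ( 1 ) ; tray ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 8 ) AND agent ( 8 , 4 )"
diffs = [(e, a) for e, a in zip(lf_parts(expected), lf_parts(actual)) if e != a]
print(diffs)  # exactly one mismatched part: the agent right-index
```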

In [ ]:
def analyze_cogs_lf_errors_dataframe(data):
  error_counts_map = defaultdict(int)
  error_counts_single_in_sentence_map = defaultdict(int)
  agent_mismatch_matches_nmod_instead_map = defaultdict(int)
  agent_mismatch_matches_nmod_instead_single_in_sentence_map = defaultdict(int)
  example_agent_left_theme_right_single_point_mismatch_not_nmod_substitution = []
  example_agent_left_theme_right_single_point_mismatch_nmod_substitution = []
  example_agent_left_single_point_mismatch_not_nmod_substitution = []
  example_agent_left_single_point_mismatch_nmod_substitution = []
  for didx in range(len(data)):
    datum = data[didx]
    actual = datum["actual"]
    expected = datum["expected"]
    input_text = datum["input"]
    # skip CP examples for this first analysis (this parsing would not work when it is nested complement phrases)
    if "that" in input_text:
      error_counts_map["cp_skip"] += 1
      error_counts_single_in_sentence_map["cp_skip"] += 1
      continue
    actual_splits = actual.replace(" AND ", " ; ").split(";")
    actual_splits = [item.strip() for item in actual_splits]
    expected_splits = expected.replace(" AND ", " ; ").split(";")
    expected_splits = [item.strip() for item in expected_splits]
    tree = parser.parse(input_text.replace(" .", "").lower().strip())
    #verbs = get_verbs(tree)
    # for enforcing, during the check of our hypothesis, a stricter expectation:
    # the misassigned agent must be the closest prepositional noun to the left of the verb (not just any prepositional noun)
    verbs, pps_before_counts, last_noun_before_first_verb_index = get_verbs_with_pps_before_and_last_noun_before_first_verb_index(tree)
    # uncomment the rest of the condition below to include the two-verb v_inf_taking + v_inf case
    if len(verbs) > 1: #and not (len(verbs) == 2 and verbs[0] == "v_inf_taking" and verbs[1] == "v_inf"):
      # let's skip almost all multi-verb cases for the moment in this analysis
      error_counts_map["more_than_one_verb_not_v_inf_skip"] += 1
      error_counts_single_in_sentence_map["more_than_one_verb_not_v_inf_skip"] += 1
      continue
    agent_side = get_agent_side(tree)
    theme_side = get_theme_side(tree)
    if len(actual_splits) != len(expected_splits):
      # excluded: this analysis starts with single point errors, where nothing in the logical form is entirely omitted
      # this is a different number of parts in the logical form (relationships, entities), not a difference in string length
      error_counts_map[f"diff_length_skip,agent={agent_side},theme={theme_side}"] += 1
      error_counts_single_in_sentence_map[f"diff_length_skip,agent={agent_side},theme={theme_side}"] += 1
      continue
    part_missed = None
    multiple_parts_missed = False
    agent_mismatch_matches_nmod_instead = False
    agent_mismatch_is_expected_for_pp_depth = False
    expected_nmod_right_idxs = set()
    # we are only using this to check whether a noun is in the right-idx of a preposition
    for idx in range(len(expected_splits)):
      expected_part = expected_splits[idx].strip()
      part_type = expected_part.split("(")[0].strip()
      if part_type == "nmod . on" or part_type == "nmod . beside" or part_type == "nmod . in":
        right_idx = get_right_idx_from_relation_or_nmod_str(expected_part)
        if right_idx is not None:
          expected_nmod_right_idxs.add(right_idx)
    # note: examples with a different number of parts in the logical forms were already excluded above,
    # so sorting and comparing position-by-position is safe because we ONLY ANALYZE THE SINGLE-PART-ERROR CASE (not e.g. swapped agent and theme right indices)
    expected_splits.sort()
    actual_splits.sort()
    for idx in range(len(expected_splits)):
      if len(actual_splits) <= idx:
        actual_part = None
      else:
        actual_part = actual_splits[idx].strip()
      expected_part = expected_splits[idx].strip()
      if actual_part != expected_part:
        print(f"mismatched part: {expected_part} (expected) != {actual_part} (actual)")
        if part_missed is not None:
          multiple_parts_missed = True
        part_missed = expected_part.strip().split(" ")[0]
        actual_part_type = actual_part.strip().split(" ")[0] if actual_part is not None else None
        error_key = f"agent={agent_side},theme={theme_side},part={part_missed}"
        if part_missed != "theme" and part_missed != "agent" and part_missed != "recipient":
          part_missed = "other"
        elif part_missed == "agent" and actual_part is not None and actual_part_type == "agent" and is_relation_or_nmod(actual_part) and get_right_idx_from_relation_or_nmod_str(actual_part) in expected_nmod_right_idxs:
          # note: actual_part_type is always "agent" in this data when part_missed == "agent", but we check it explicitly for clarity
          agent_mismatch_matches_nmod_instead = True # counted only when the mismatch is also at the expected nmod depth, keeping the hypothesis check strict
          actual_agent_idx = int(get_right_idx_from_relation_or_nmod_str(actual_part))
          if last_noun_before_first_verb_index == actual_agent_idx:
            agent_mismatch_is_expected_for_pp_depth = True
            agent_mismatch_matches_nmod_instead_map[error_key] += 1
        error_counts_map[error_key] += 1
    if multiple_parts_missed:
      error_counts_single_in_sentence_map[f"multiple,agent={agent_side},theme={theme_side}"] += 1
    elif part_missed != None:
      error_key = f"agent={agent_side},theme={theme_side},part={part_missed}"
      error_counts_single_in_sentence_map[error_key] += 1
      if agent_mismatch_matches_nmod_instead and agent_mismatch_is_expected_for_pp_depth:
        agent_mismatch_matches_nmod_instead_single_in_sentence_map[error_key] += 1
        print(f"example agent error substitute nmod instead - input: {input_text}, actual: {actual}, expected: {expected}")
        if agent_side=="left" and part_missed=="agent":
          example_agent_left_single_point_mismatch_nmod_substitution.append(f"input: {input_text}\nactual:  {actual}\nexpected: {expected}\n")
          if theme_side == "right":
            example_agent_left_theme_right_single_point_mismatch_nmod_substitution.append(f"input: {input_text}\nactual:  {actual}\nexpected: {expected}\n")
      else:
        if agent_side=="left" and part_missed=="agent":
          example_agent_left_single_point_mismatch_not_nmod_substitution.append(f"input: {input_text}\nactual:  {actual}\nexpected: {expected}\n")
          if theme_side == "right":
            example_agent_left_theme_right_single_point_mismatch_not_nmod_substitution.append(f"input: {input_text}\nactual:  {actual}\nexpected: {expected}\n")
  print(error_counts_single_in_sentence_map)
  return error_counts_map, error_counts_single_in_sentence_map, agent_mismatch_matches_nmod_instead_single_in_sentence_map, example_agent_left_theme_right_single_point_mismatch_nmod_substitution, example_agent_left_theme_right_single_point_mismatch_not_nmod_substitution, example_agent_left_single_point_mismatch_nmod_substitution, example_agent_left_single_point_mismatch_not_nmod_substitution
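As a quick standalone illustration of the sort-and-compare step in the function above (a toy sketch, not part of the pipeline; the logical-form strings are hypothetical but follow the ReCOGS format): with equal-length split lists, sorting both sides aligns the parts so that a single-point error shows up as exactly one mismatched pair.

```python
# Toy sketch of the sorted part-by-part comparison used above.
# The LF parts below are hypothetical examples in the ReCOGS style.
expected = ["girl ( 1 )", "table ( 4 )", "nmod . on ( 1 , 4 )", "smile ( 5 )", "agent ( 5 , 1 )"]
actual = ["girl ( 1 )", "table ( 4 )", "nmod . on ( 1 , 4 )", "smile ( 5 )", "agent ( 5 , 4 )"]
# equal lengths, so position-by-position comparison after sorting is safe
mismatches = [(e, a) for e, a in zip(sorted(expected), sorted(actual)) if e != a]
print(mismatches)  # the single agent substitution: ( 5 , 1 ) -> ( 5 , 4 )
```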
In [ ]:
def analyze_error_counts_single_in_sentence_map(error_counts_single_in_sentence_map):
  all_agent_left_keys = [k for k in error_counts_single_in_sentence_map.keys() if k.startswith("agent=left")]
  expected_agent_left_keys = [k for k in error_counts_single_in_sentence_map.keys() if k.startswith("agent=left") and k.endswith("part=agent")]
  agent_left_has_error_in_agent_count = np.array([error_counts_single_in_sentence_map[key] for key in expected_agent_left_keys]).sum()
  all_agent_left_keys_count = np.array([error_counts_single_in_sentence_map[key] for key in all_agent_left_keys]).sum()
  fraction_in_expected_part = agent_left_has_error_in_agent_count/all_agent_left_keys_count
  return fraction_in_expected_part, agent_left_has_error_in_agent_count, all_agent_left_keys_count

def analyze_agent_mismatch_matches_nmod_instead_single_in_sentence_map(agent_mismatch_matches_nmod_instead_single_in_sentence_map):
  # note: this function also reads error_counts_single_in_sentence_map from the enclosing (global) scope
  all_agent_left_keys_nmod_single_sentence_errors = [k for k in agent_mismatch_matches_nmod_instead_single_in_sentence_map.keys() if k.startswith("agent=left")]
  all_agent_left_agent_mismatch_matches_nmod_instead_count = np.array([agent_mismatch_matches_nmod_instead_single_in_sentence_map[k] for k in all_agent_left_keys_nmod_single_sentence_errors]).sum()
  all_agent_left_keys = [k for k in error_counts_single_in_sentence_map.keys() if k.startswith("agent=left")]
  expected_agent_left_keys = [k for k in error_counts_single_in_sentence_map.keys() if k.startswith("agent=left") and k.endswith("part=agent")]
  all_agent_left_keys_count = np.array([error_counts_single_in_sentence_map[key] for key in all_agent_left_keys]).sum()
  agent_left_expected_categories_count = np.array([error_counts_single_in_sentence_map[key] for key in expected_agent_left_keys]).sum()
  fraction_agent_errors_where_it_was_agent_nmod_substitution = all_agent_left_agent_mismatch_matches_nmod_instead_count/agent_left_expected_categories_count
  fraction_agent_errors_where_it_was_agent_nmod_substitution_theme_right = agent_mismatch_matches_nmod_instead_single_in_sentence_map['agent=left,theme=right,part=agent']/error_counts_single_in_sentence_map['agent=left,theme=right,part=agent']
  count_errors_where_it_was_agent_nmod_substitution_theme_right = agent_mismatch_matches_nmod_instead_single_in_sentence_map['agent=left,theme=right,part=agent']
  count_errors_where_it_was_agent_error_theme_right = error_counts_single_in_sentence_map['agent=left,theme=right,part=agent']
  return fraction_agent_errors_where_it_was_agent_nmod_substitution, fraction_agent_errors_where_it_was_agent_nmod_substitution_theme_right, count_errors_where_it_was_agent_nmod_substitution_theme_right, count_errors_where_it_was_agent_error_theme_right
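The fraction computations above can be sketched with hypothetical counts (the key names follow the error_key format used earlier; the numbers are made up for illustration, not results from this notebook):

```python
# Hypothetical single-point error counts keyed like error_key above.
counts = {
    "agent=left,theme=right,part=agent": 8,
    "agent=left,theme=right,part=theme": 1,
    "agent=left,theme=none,part=agent": 3,
}
agent_left_total = sum(v for k, v in counts.items() if k.startswith("agent=left"))
agent_part_total = sum(v for k, v in counts.items() if k.startswith("agent=left") and k.endswith("part=agent"))
fraction_in_expected_part = agent_part_total / agent_left_total
print(fraction_in_expected_part)  # 11/12 of the agent-left errors land in the agent part
```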
In [ ]:
fraction_in_expected_part_list = list()
fraction_agent_errors_where_it_was_nmod_substitution_list = list()
fraction_agent_errors_where_it_was_nmod_substitution_theme_right_list = list()
total_agent_left_single_point_error_count = 0
total_agent_left_single_point_error_in_agent_count = 0
total_single_point_agent_errors_theme_right_count = 0
total_single_point_agent_errors_theme_right_count_where_substitution_by_nmod = 0
total_cp_skip_count = 0
total_more_than_one_verb_not_v_inf_skip_count = 0
total_diff_length_skip_count = 0
example_agent_left_theme_right_single_point_mismatch_nmod_substitution_all = []
example_agent_left_theme_right_single_point_mismatch_not_nmod_substitution_all = []
example_agent_left_single_point_mismatch_nmod_substitution_all = []
example_agent_left_single_point_mismatch_not_nmod_substitution_all = []
for data in data_per_seed:
  error_counts_map, error_counts_single_in_sentence_map, agent_mismatch_matches_nmod_instead_single_in_sentence_map, example_agent_left_theme_right_single_point_mismatch_nmod_substitution, example_agent_left_theme_right_single_point_mismatch_not_nmod_substitution, example_agent_left_single_point_mismatch_nmod_substitution, example_agent_left_single_point_mismatch_not_nmod_substitution = analyze_cogs_lf_errors_dataframe(data)
  total_cp_skip_count += error_counts_single_in_sentence_map["cp_skip"]
  total_more_than_one_verb_not_v_inf_skip_count += error_counts_single_in_sentence_map["more_than_one_verb_not_v_inf_skip"]
  # note: diff_length_skip counts are recorded under keys with agent/theme suffixes, so sum over the matching prefix
  total_diff_length_skip_count += sum(v for k, v in error_counts_single_in_sentence_map.items() if k.startswith("diff_length_skip"))
  example_agent_left_theme_right_single_point_mismatch_nmod_substitution_all.extend(example_agent_left_theme_right_single_point_mismatch_nmod_substitution)
  example_agent_left_theme_right_single_point_mismatch_not_nmod_substitution_all.extend(example_agent_left_theme_right_single_point_mismatch_not_nmod_substitution)
  example_agent_left_single_point_mismatch_nmod_substitution_all.extend(example_agent_left_single_point_mismatch_nmod_substitution)
  example_agent_left_single_point_mismatch_not_nmod_substitution_all.extend(example_agent_left_single_point_mismatch_not_nmod_substitution)
  fraction_in_expected_part, agent_left_has_error_in_agent_count, all_agent_left_keys_count = analyze_error_counts_single_in_sentence_map(error_counts_single_in_sentence_map)
  total_agent_left_single_point_error_count += all_agent_left_keys_count
  total_agent_left_single_point_error_in_agent_count += agent_left_has_error_in_agent_count
  fraction_in_expected_part_list.append(fraction_in_expected_part)
  fraction_agent_errors_where_it_was_nmod_substitution, fraction_agent_errors_where_it_was_nmod_substitution_theme_right, count_errors_where_it_was_agent_nmod_substitution_theme_right, count_errors_where_it_was_agent_error_theme_right = analyze_agent_mismatch_matches_nmod_instead_single_in_sentence_map(agent_mismatch_matches_nmod_instead_single_in_sentence_map)
  total_single_point_agent_errors_theme_right_count += count_errors_where_it_was_agent_error_theme_right
  total_single_point_agent_errors_theme_right_count_where_substitution_by_nmod += count_errors_where_it_was_agent_nmod_substitution_theme_right
  fraction_agent_errors_where_it_was_nmod_substitution_list.append(fraction_agent_errors_where_it_was_nmod_substitution)
  fraction_agent_errors_where_it_was_nmod_substitution_theme_right_list.append(fraction_agent_errors_where_it_was_nmod_substitution_theme_right)
Streaming output truncated to the last 5000 lines.
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != eat ( 5 ) (actual)
mismatched part: eat ( 5 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != table ( 4 ) (actual)
mismatched part: table ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != bunny ( 1 ) (actual)
mismatched part: bunny ( 1 ) (expected) != draw ( 5 ) (actual)
mismatched part: draw ( 5 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a table smiled ., actual:  girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != clean ( 5 ) (actual)
mismatched part: clean ( 5 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A tiger on a bible slept ., actual:  tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 8 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != raisin ( 11 ) (actual)
mismatched part: raisin ( 11 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != cat ( 1 ) (actual)
mismatched part: cat ( 1 ) (expected) != house ( 4 ) (actual)
mismatched part: house ( 4 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != paint ( 5 ) (actual)
mismatched part: paint ( 5 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != call ( 5 ) (actual)
mismatched part: call ( 5 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != table ( 4 ) (actual)
mismatched part: table ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != car ( 4 ) (actual)
mismatched part: car ( 4 ) (expected) != hear ( 5 ) (actual)
mismatched part: hear ( 5 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != servant ( 1 ) (actual)
mismatched part: servant ( 1 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != paint ( 5 ) (actual)
mismatched part: paint ( 5 ) (expected) != table ( 4 ) (actual)
mismatched part: table ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the canvas gave the glue beside a table to a girl ., actual:  * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 13 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the tabletop sold the princess a cake beside a monkey ., actual:  * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != basket ( 4 ) (actual)
mismatched part: basket ( 4 ) (expected) != cook ( 5 ) (actual)
mismatched part: cook ( 5 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: crumple ( 8 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: house ( 7 ) (expected) != crumple ( 8 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != house ( 7 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != container ( 4 ) (actual)
mismatched part: container ( 4 ) (expected) != draw ( 5 ) (actual)
mismatched part: draw ( 5 ) (expected) != mouse ( 1 ) (actual)
mismatched part: mouse ( 1 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
example agent error substitute nmod instead - input: The girl in the house beside a cage dusted a ball ., actual:  * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 ), expected: * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The mouse in the crate liked a professor on the road ., actual:  * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != sleep ( 8 ) (actual)
mismatched part: sleep ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != juggle ( 5 ) (actual)
mismatched part: juggle ( 5 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != puppy ( 1 ) (actual)
mismatched part: puppy ( 1 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != dust ( 5 ) (actual)
mismatched part: dust ( 5 ) (expected) != house ( 4 ) (actual)
mismatched part: house ( 4 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != clean ( 5 ) (actual)
mismatched part: clean ( 5 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != stand ( 4 ) (actual)
mismatched part: stand ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: theme ( 8 , 1 ) (expected) != theme ( 8 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != boy ( 1 ) (actual)
mismatched part: boy ( 1 ) (expected) != bunker ( 7 ) (actual)
mismatched part: bunker ( 7 ) (expected) != juggle ( 8 ) (actual)
mismatched part: juggle ( 8 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != nmod . in ( 4 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != road ( 4 ) (actual)
mismatched part: road ( 4 ) (expected) != theme ( 8 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 12 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 13 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 6 , 12 ) (expected) != agent ( 6 , 9 ) (actual)
mismatched part: recipient ( 9 , 12 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a container gave the brush in the cart to a duke ., actual:  girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: theme ( 8 , 1 ) (expected) != theme ( 8 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != bag ( 4 ) (actual)
mismatched part: bag ( 4 ) (expected) != frog ( 1 ) (actual)
mismatched part: frog ( 1 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != sketch ( 5 ) (actual)
mismatched part: sketch ( 5 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != investigate ( 5 ) (actual)
mismatched part: investigate ( 5 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != president ( 1 ) (actual)
mismatched part: president ( 1 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != creature ( 1 ) (actual)
mismatched part: creature ( 1 ) (expected) != draw ( 5 ) (actual)
mismatched part: draw ( 5 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != draw ( 5 ) (actual)
mismatched part: draw ( 5 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy beside a chair laughed ., actual:  boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 ), expected: boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != know ( 5 ) (actual)
mismatched part: know ( 5 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != sketch ( 5 ) (actual)
mismatched part: sketch ( 5 ) (expected) != surface ( 4 ) (actual)
mismatched part: surface ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 6 , 12 ) (expected) != agent ( 6 , 9 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 8 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 6 , 11 ) (expected) != agent ( 6 , 8 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 13 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A teacher beside a table danced ., actual:  teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 ), expected: teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the tin fed the cake beside a clock to Liam ., actual:  * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child in a car smiled ., actual:  child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the tub lended Emma the cake ., actual:  * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ), expected: * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 12 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != pack ( 5 ) (actual)
mismatched part: pack ( 5 ) (expected) != stage ( 4 ) (actual)
mismatched part: stage ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 13 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 8 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A queen on a device screamed ., actual:  queen ( 1 ) ; device ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 ), expected: queen ( 1 ) ; device ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A fish on a leaflet loaned the cat the donut beside the stage ., actual:  fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 6 , 12 ) (expected) != agent ( 6 , 9 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != recipient ( 8 , 10 ) (actual)
mismatched part: recipient ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != observe ( 5 ) (actual)
mismatched part: observe ( 5 ) (expected) != stool ( 4 ) (actual)
mismatched part: stool ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the dog handed a cat the raisin on a table ., actual:  girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 ), expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 13 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != stutter ( 8 ) (actual)
mismatched part: stutter ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 6 , 11 ) (expected) != agent ( 6 , 8 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A host beside a table smiled ., actual:  host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 6 , 11 ) (expected) != agent ( 6 , 8 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
Error category counts (defaultdict(<class 'int'>), one category per line):
  'agent=left,theme=None,part=agent': 22
  'multiple,agent=left,theme=None': 80
  'multiple,agent=right or middle,theme=middle': 34
  'agent=left,theme=right,part=agent': 5
  'diff_length_skip,agent=right or middle,theme=left': 65
  'multiple,agent=left,theme=middle': 42
  'multiple,agent=left,theme=right': 173
  'more_than_one_verb_not_v_inf_skip': 73
  'diff_length_skip,agent=left,theme=None': 56
  'multiple,agent=right or middle,theme=None': 114
  'diff_length_skip,agent=right or middle,theme=None': 50
  'cp_skip': 77
  'agent=right or middle,theme=left,part=agent': 14
  'agent=left,theme=middle,part=agent': 5
  'agent=right or middle,theme=left,part=recipient': 4
  'agent=right or middle,theme=None,part=theme': 2
  'agent=right or middle,theme=None,part=recipient': 1
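The "mismatched part" lines above come from comparing the model's predicted ReCOGS_pos logical form against the expected one part by part. A minimal sketch of that comparison (a hypothetical reimplementation for illustration, not the notebook's actual code; `split_parts` and `mismatched_parts` are names introduced here) could look like this, assuming parts are separated by ";" (noun declarations) and "AND" (relations):

```python
import re

def split_parts(lf):
    # ReCOGS_pos logical forms separate noun declarations with ";"
    # and relations with "AND"; split on both and strip whitespace.
    return [p.strip() for p in re.split(r';|AND', lf) if p.strip()]

def mismatched_parts(expected, actual):
    # Positionally compare parts; if the outputs have different numbers
    # of parts, a positional diff is not meaningful (cf. the
    # "diff_length_skip" categories in the counts above).
    exp, act = split_parts(expected), split_parts(actual)
    if len(exp) != len(act):
        return [("diff_length_skip", None)]
    return [(e, a) for e, a in zip(exp, act) if e != a]

# Example drawn from an error above ("A boy beside a chair laughed ."):
expected = ("boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) "
            "AND laugh ( 5 ) AND agent ( 5 , 1 )")
actual = ("boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) "
          "AND laugh ( 5 ) AND agent ( 5 , 4 )")
for e, a in mismatched_parts(expected, actual):
    print(f"mismatched part: {e} (expected) != {a} (actual)")
# prints: mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
```

Note how the single mismatched part captures the hypothesized flat-pattern error: the agent index shifts from the subject noun (1) to the nearer prepositional noun (4).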
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != scream ( 8 ) (actual)
mismatched part: scream ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != sneeze ( 8 ) (actual)
mismatched part: sneeze ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . on ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the house slept ., actual:  girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bear in the car froze the key on the table ., actual:  bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the bed lended the manager the leaf ., actual:  * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A goose in a spaceship gasped ., actual:  goose ( 1 ) ; spaceship ( 4 ) ; nmod . in ( 1 , 4 ) AND gasp ( 5 ) AND agent ( 5 , 4 ), expected: goose ( 1 ) ; spaceship ( 4 ) ; nmod . in ( 1 , 4 ) AND gasp ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A baby on a truck slept ., actual:  baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a hole slept ., actual:  girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != nurse ( 8 ) (actual)
mismatched part: nurse ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A cat on a bag cleaned a chemical in a house ., actual:  cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A scientist on the desk admired the cake beside the chair ., actual:  scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bear beside a chair napped ., actual:  bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 4 ), expected: bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A horse on the cake investigated the melon on a box ., actual:  horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . in ( 11 , 14 ) (expected) != nmod . in ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a cage knew ., actual:  girl ( 1 ) ; cage ( 4 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; cage ( 4 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The monster beside a road smiled ., actual:  * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the house liked a cake beside a bed ., actual:  * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A student on a tree sneezed ., actual:  student ( 1 ) ; tree ( 4 ) ; nmod . on ( 1 , 4 ) AND sneeze ( 5 ) AND agent ( 5 , 4 ), expected: student ( 1 ) ; tree ( 4 ) ; nmod . on ( 1 , 4 ) AND sneeze ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a car ate ., actual:  girl ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy in the trailer poked the girl beside a table ., actual:  boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The champion beside a table liked a cake on the computer ., actual:  * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != table ( 7 ) (actual)
mismatched part: table ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy in the vase sent the cake on a table to a cat ., actual:  * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: house ( 4 ) (expected) != agent ( 9 , 7 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != house ( 4 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A student in a pot liked the girl on a chair ., actual:  student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A cat on a sofa slept ., actual:  cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog on a mattress ate the radio on the bike ., actual:  * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a boat cooked ., actual:  girl ( 1 ) ; boat ( 4 ) ; nmod . on ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; boat ( 4 ) ; nmod . on ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The penguin in the drawer rolled the donut beside the computer ., actual:  * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the surface screamed ., actual:  girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != sleep ( 8 ) (actual)
mismatched part: sleep ( 8 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 9 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside a table slept ., actual:  * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The horse on the stack loaned the lollipop on a table to Isaac ., actual:  * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The teacher on the table gave Liam a cake on the tripod ., actual:  * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a rock smiled ., actual:  girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy beside a cabinet danced ., actual:  * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 ), expected: * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )
mismatched part: nmod . in ( 11 , 14 ) (expected) != nmod . in ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the stage found the banana in a bucket ., actual:  * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog on the table gave a cake beside the bottle to James ., actual:  * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 ), expected: * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 11 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on the stage ate the boy on a seat ., actual:  * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 9 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bird on a train liked a cake beside a box ., actual:  bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl on a booklet walked ., actual:  * girl ( 1 ) ; booklet ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; booklet ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: theme ( 6 , 1 ) (expected) != theme ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A squirrel on a computer drew ., actual:  squirrel ( 1 ) ; computer ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ), expected: squirrel ( 1 ) ; computer ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The horse on a bed slept ., actual:  * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != sleep ( 8 ) (actual)
mismatched part: sleep ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != see ( 8 ) (actual)
mismatched part: see ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bear on the seat discovered a boy beside a stage ., actual:  bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The turkey in the storage held a cake beside a table ., actual:  * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in a box liked the donut beside a stage ., actual:  * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != room ( 4 ) (actual)
mismatched part: room ( 4 ) (expected) != smile ( 8 ) (actual)
mismatched part: smile ( 8 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . in ( 11 , 14 ) (expected) != nmod . in ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The chicken on the table poked the child in a cup ., actual:  * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog beside a doll slept ., actual:  * frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The monkey on the futon gave the cat a pretzel ., actual:  * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy in the haystack slept ., actual:  boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A dog on the stage snored ., actual:  dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 ), expected: dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A dog in the wardrobe smiled ., actual:  dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the table ate the ball in a cafe ., actual:  girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 7 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: block ( 1 ) (expected) != agent ( 9 , 7 ) (actual)
mismatched part: eat ( 9 ) (expected) != block ( 1 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != eat ( 9 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the taxi slept ., actual:  * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child on a table gave Scarlett a balloon beside a lemon ., actual:  child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on a chair ate a jigsaw on the paper ., actual:  * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a table smiled ., actual:  girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A tiger on a bible slept ., actual:  tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the road cried ., actual:  * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A servant in a car heard ., actual:  servant ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND hear ( 5 ) AND agent ( 5 , 4 ), expected: servant ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND hear ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the canvas gave the glue beside a table to a girl ., actual:  * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside the table saw the cat in a car ., actual:  girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the tabletop sold the princess a cake beside a monkey ., actual:  * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside a sword ate a fruit in the house ., actual:  girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != nmod . beside ( 1 , 5 ) (actual)
mismatched part: crumple ( 8 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: house ( 7 ) (expected) != crumple ( 8 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != house ( 7 ) (actual)
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the rock walked ., actual:  girl ( 1 ) ; * rock ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * rock ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A mouse in a container drew ., actual:  mouse ( 1 ) ; container ( 4 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ), expected: mouse ( 1 ) ; container ( 4 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the room cried ., actual:  girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The mouse in the crate liked a professor on the road ., actual:  * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != sleep ( 8 ) (actual)
mismatched part: sleep ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in a house scoffed ., actual:  * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl on a tray served the cat a cake ., actual:  * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A puppy in the car juggled ., actual:  puppy ( 1 ) ; * car ( 4 ) ; nmod . in ( 1 , 4 ) AND juggle ( 5 ) AND agent ( 5 , 4 ), expected: puppy ( 1 ) ; * car ( 4 ) ; nmod . in ( 1 , 4 ) AND juggle ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 9 ) (actual)
mismatched part: theme ( 8 , 11 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: cot ( 7 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != cot ( 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the car liked a bottle in the house ., actual:  girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != road ( 4 ) (actual)
mismatched part: road ( 4 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != poke ( 9 ) (actual)
mismatched part: poke ( 9 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: house ( 7 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != house ( 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on a table snored ., actual:  * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 ), expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the surface cried ., actual:  girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: freeze ( 8 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: girl ( 1 ) (expected) != freeze ( 8 ) (actual)
mismatched part: house ( 4 ) (expected) != girl ( 1 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != house ( 4 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A frog in a bag sketched ., actual:  frog ( 1 ) ; bag ( 4 ) ; nmod . in ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 ), expected: frog ( 1 ) ; bag ( 4 ) ; nmod . in ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy beside the whale slept ., actual:  * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The consumer on the bed gave Evelyn a molecule beside the duck ., actual:  * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The lion beside a piano gave the girl the donut ., actual:  * lion ( 1 ) ; piano ( 4 ) ; * girl ( 7 ) ; * donut ( 9 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * lion ( 1 ) ; piano ( 4 ) ; * girl ( 7 ) ; * donut ( 9 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the panel drew ., actual:  girl ( 1 ) ; * panel ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * panel ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy beside a chair laughed ., actual:  boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 ), expected: boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child beside a chair ate the rose beside a shoe ., actual:  * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a surface sketched ., actual:  girl ( 1 ) ; surface ( 4 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; surface ( 4 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on a table scoffed ., actual:  * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 ), expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy on a bed sent the cat a donut ., actual:  boy ( 1 ) ; bed ( 4 ) ; * cat ( 7 ) ; donut ( 9 ) ; nmod . on ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: boy ( 1 ) ; bed ( 4 ) ; * cat ( 7 ) ; donut ( 9 ) ; nmod . on ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A teacher on the table cried ., actual:  teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A teacher beside a table danced ., actual:  teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 ), expected: teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy on the surface gave the girl a bell ., actual:  boy ( 1 ) ; * surface ( 4 ) ; * girl ( 7 ) ; bell ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: boy ( 1 ) ; * surface ( 4 ) ; * girl ( 7 ) ; bell ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The kid on a trampoline slept ., actual:  * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 10 , 13 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child in a car smiled ., actual:  child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the tub lended Emma the cake ., actual:  * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ), expected: * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside a stage cooked a cake in the shoe ., actual:  girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The mouse on a table gave the donut in the nest to a cat ., actual:  * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 ), expected: * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the chair slept ., actual:  girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A queen on a device screamed ., actual:  queen ( 1 ) ; device ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 ), expected: queen ( 1 ) ; device ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A buyer beside the table rolled the cake in the backpack ., actual:  buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A fish on a leaflet loaned the cat the donut beside the stage ., actual:  fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A priest on the box admired a cake on the table ., actual:  priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a stool observed ., actual:  girl ( 1 ) ; stool ( 4 ) ; nmod . on ( 1 , 4 ) AND observe ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; stool ( 4 ) ; nmod . on ( 1 , 4 ) AND observe ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the dog handed a cat the raisin on a table ., actual:  girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 ), expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy on a towel gave the frog the cake on a booklet ., actual:  * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 ), expected: * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != stutter ( 8 ) (actual)
mismatched part: stutter ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A director in a house walked ., actual:  director ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 ), expected: director ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A host beside a table smiled ., actual:  host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 9 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The baby on the stage gave the girl a cake ., actual:  * baby ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * baby ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
error category counts (same tallies as the raw defaultdict, sorted by frequency):
  multiple,agent=left,theme=right: 144
  multiple,agent=right or middle,theme=None: 112
  diff_length_skip,agent=right or middle,theme=left: 85
  cp_skip: 77
  more_than_one_verb_not_v_inf_skip: 73
  diff_length_skip,agent=right or middle,theme=None: 67
  agent=left,theme=None,part=agent: 64
  diff_length_skip,agent=left,theme=None: 48
  multiple,agent=left,theme=middle: 28
  multiple,agent=right or middle,theme=middle: 27
  agent=left,theme=right,part=agent: 27
  multiple,agent=left,theme=None: 24
  agent=right or middle,theme=None,part=recipient: 6
  agent=left,theme=middle,part=agent: 5
  multiple,agent=right or middle,theme=left: 3
  agent=right or middle,theme=middle,part=other: 3
  agent=right or middle,theme=left,part=other: 2
  agent=right or middle,theme=left,part=theme: 1
  agent=right or middle,theme=middle,part=recipient: 1
  agent=left,theme=right,part=other: 1
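The dominant error in the records above is agent ( v , subject ) being replaced by agent ( v , prepositional-noun ), e.g. agent ( 5 , 1 ) → agent ( 5 , 4 ). As a minimal sketch of the hypothesis (this is NOT the baseline model, and the toy NOUNS/VERBS lexicon is an assumption for illustration only), a flat "closest noun left of the verb" heuristic reproduces exactly these erroneous indices on subject-PP inputs:

```python
# Toy lexicon for the two example sentences below (assumption, not from ReCOGS vocab)
NOUNS = {"frog", "bag", "boy", "whale"}
VERBS = {"sketched", "slept"}

def flat_agent(tokens):
    """Hypothesized flat heuristic: agent = nearest noun to the LEFT of the verb.

    A subject PP ("in a bag", "beside the whale") inserts its noun between the
    subject and the verb, so the nearest-left noun becomes the prepositional
    noun, not the subject.
    """
    v = next(i for i, t in enumerate(tokens) if t in VERBS)   # verb position
    n = max(i for i in range(v) if tokens[i] in NOUNS)        # nearest noun left of it
    return v, n

# "A frog in a bag sketched ." -> (5, 4), i.e. agent ( 5 , 4 ),
# matching the baseline's error above; the correct answer is agent ( 5 , 1 ).
print(flat_agent("A frog in a bag sketched .".split()))
print(flat_agent("The boy beside the whale slept .".split()))
```

Without the subject PP the nearest-left noun is the subject itself, so the same heuristic is correct on training-distribution inputs, which is consistent with the errors appearing only when the subject is PP-modified.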
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != scream ( 8 ) (actual)
mismatched part: scream ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != sneeze ( 8 ) (actual)
mismatched part: sneeze ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 13 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != nurse ( 8 ) (actual)
mismatched part: nurse ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != appreciate ( 5 ) (actual)
mismatched part: appreciate ( 5 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != nmod . in ( 7 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 6 , 11 ) (expected) != agent ( 6 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A scientist on the desk admired the cake beside the chair ., actual:  scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bear beside a chair napped ., actual:  bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 4 ), expected: bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != table ( 7 ) (actual)
mismatched part: table ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy in the vase sent the cake on a table to a cat ., actual:  * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The politician beside the book cried ., actual:  * politician ( 1 ) ; * book ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: * politician ( 1 ) ; * book ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != sleep ( 8 ) (actual)
mismatched part: sleep ( 8 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The horse on the stack loaned the lollipop on a table to Isaac ., actual:  * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a rock smiled ., actual:  girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 5 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy beside a cabinet danced ., actual:  * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 ), expected: * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 12 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 4 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog beside the table cried ., actual:  * dog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: * dog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The prince in a bin smiled ., actual:  * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 6 , 12 ) (expected) != agent ( 6 , 15 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != sleep ( 8 ) (actual)
mismatched part: sleep ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != see ( 8 ) (actual)
mismatched part: see ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The turkey in the storage held a cake beside a table ., actual:  * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != donut ( 7 ) (actual)
mismatched part: donut ( 7 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . beside ( 7 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != room ( 4 ) (actual)
mismatched part: room ( 4 ) (expected) != smile ( 8 ) (actual)
mismatched part: smile ( 8 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A dog on the stage snored ., actual:  dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 ), expected: dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != game ( 7 ) (actual)
mismatched part: game ( 7 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != nmod . on ( 7 , 10 ) (actual)
mismatched part: nmod . on ( 7 , 10 ) (expected) != nmod . on ( 7 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a table smiled ., actual:  girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the road cried ., actual:  * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the tabletop sold the princess a cake beside a monkey ., actual:  * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: crumple ( 8 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: house ( 7 ) (expected) != crumple ( 8 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != house ( 7 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The mouse in the crate liked a professor on the road ., actual:  * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != sleep ( 8 ) (actual)
mismatched part: sleep ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 9 ) (actual)
mismatched part: theme ( 8 , 11 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: cot ( 7 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != cot ( 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the car liked a bottle in the house ., actual:  girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != road ( 4 ) (actual)
mismatched part: road ( 4 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: house ( 7 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != house ( 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: freeze ( 8 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: girl ( 1 ) (expected) != freeze ( 8 ) (actual)
mismatched part: house ( 4 ) (expected) != girl ( 1 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != house ( 4 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy beside a chair laughed ., actual:  boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 ), expected: boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child in a car smiled ., actual:  child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 12 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 12 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A fish on a leaflet loaned the cat the donut beside the stage ., actual:  fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 12 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the dog handed a cat the raisin on a table ., actual:  girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 ), expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != stutter ( 8 ) (actual)
mismatched part: stutter ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A host beside a table smiled ., actual:  host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
error category counts (defaultdict(int), sorted by count):
  multiple,agent=left,theme=right: 169
  multiple,agent=right or middle,theme=None: 121
  diff_length_skip,agent=left,theme=None: 82
  cp_skip: 77
  more_than_one_verb_not_v_inf_skip: 71
  diff_length_skip,agent=right or middle,theme=left: 70
  diff_length_skip,agent=right or middle,theme=None: 68
  multiple,agent=left,theme=None: 55
  multiple,agent=left,theme=middle: 45
  multiple,agent=right or middle,theme=middle: 34
  agent=left,theme=None,part=agent: 15
  agent=left,theme=right,part=agent: 4
  diff_length_skip,agent=left,theme=right: 3
  agent=right or middle,theme=None,part=recipient: 2
  agent=left,theme=middle,part=agent: 2
  agent=right or middle,theme=left,part=agent: 1
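The summary above is an error-category tally. A hedged sketch of how such a tally can be accumulated with `collections.defaultdict` (the category labels mirror those in the summary, but the per-example classification logic shown here is illustrative, not the notebook's):

```python
from collections import defaultdict

counts = defaultdict(int)  # missing keys start at 0, so no key checks needed

# Hypothetical stream of per-example category labels:
for label in ["multiple,agent=left,theme=right",
              "cp_skip",
              "multiple,agent=left,theme=right"]:
    counts[label] += 1

print(dict(counts))
# -> {'multiple,agent=left,theme=right': 2, 'cp_skip': 1}
```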
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != scream ( 8 ) (actual)
mismatched part: scream ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A donkey in the room sold Ella a donut ., actual:  donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ), expected: donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
example agent error substitute nmod instead - input: The dog in a bakery in the bag sneezed ., actual:  * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 7 ), expected: * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . on ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The sailor in a house lended a biscuit on a table to a goose ., actual:  * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bear in the car froze the key on the table ., actual:  bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the bed lended the manager the leaf ., actual:  * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A mouse beside the table ate ., actual:  mouse ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ), expected: mouse ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The baby beside a valve painted the cake ., actual:  * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A baby in a garden called the raisin ., actual:  baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the house knew a cake ., actual:  girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != nurse ( 8 ) (actual)
mismatched part: nurse ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child in a drawer gave Amelia a box beside the machine ., actual:  * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A cat on a bag cleaned a chemical in a house ., actual:  cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A frog beside the table cried ., actual:  frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy on the bed sketched ., actual:  boy ( 1 ) ; * bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 ), expected: boy ( 1 ) ; * bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside a rock passed Dylan a pen on a box ., actual:  girl ( 1 ) ; rock ( 4 ) ; Dylan ( 6 ) ; pen ( 8 ) ; box ( 11 ) ; nmod . beside ( 1 , 4 ) AND pass ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: girl ( 1 ) ; rock ( 4 ) ; Dylan ( 6 ) ; pen ( 8 ) ; box ( 11 ) ; nmod . beside ( 1 , 4 ) AND pass ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A driver beside the bed smiled ., actual:  driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A scientist on the desk admired the cake beside the chair ., actual:  scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A horse on the cake investigated the melon on a box ., actual:  horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The researcher in a room ate the baby ., actual:  * researcher ( 1 ) ; room ( 4 ) ; * baby ( 7 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * researcher ( 1 ) ; room ( 4 ) ; * baby ( 7 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The monster beside a road smiled ., actual:  * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the house liked a cake beside a bed ., actual:  * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the house forwarded Victoria a gumball in the shoe ., actual:  girl ( 1 ) ; * house ( 4 ) ; Victoria ( 6 ) ; gumball ( 8 ) ; * shoe ( 11 ) ; nmod . in ( 1 , 4 ) AND forward ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 ), expected: girl ( 1 ) ; * house ( 4 ) ; Victoria ( 6 ) ; gumball ( 8 ) ; * shoe ( 11 ) ; nmod . in ( 1 , 4 ) AND forward ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy in the trailer poked the girl beside a table ., actual:  boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The champion beside a table liked a cake on the computer ., actual:  * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != table ( 7 ) (actual)
mismatched part: table ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy in the vase sent the cake on a table to a cat ., actual:  * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != paint ( 9 ) (actual)
mismatched part: paint ( 9 ) (expected) != pumpkin ( 1 ) (actual)
mismatched part: pumpkin ( 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child on the pad ate the cat ., actual:  * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A student in a pot liked the girl on a chair ., actual:  student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A teacher beside the table burned the producer on the road ., actual:  teacher ( 1 ) ; * table ( 4 ) ; * producer ( 7 ) ; * road ( 10 ) ; nmod . beside ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: teacher ( 1 ) ; * table ( 4 ) ; * producer ( 7 ) ; * road ( 10 ) ; nmod . beside ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The wolf in the house offered the donut on the dish to Sophia ., actual:  * wolf ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * dish ( 10 ) ; Sophia ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: * wolf ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * dish ( 10 ) ; Sophia ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog on a mattress ate the radio on the bike ., actual:  * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A horse in the van ate ., actual:  horse ( 1 ) ; * van ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ), expected: horse ( 1 ) ; * van ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The penguin in the drawer rolled the donut beside the computer ., actual:  * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A dog beside the seat screamed ., actual:  dog ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 ), expected: dog ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog on a cot dusted a cookie ., actual:  * frog ( 1 ) ; cot ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * frog ( 1 ) ; cot ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat in a house adored the donut on a stage ., actual:  * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the surface screamed ., actual:  girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != sleep ( 8 ) (actual)
mismatched part: sleep ( 8 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The horse on the stack loaned the lollipop on a table to Isaac ., actual:  * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the table awarded a cake on the stand to Oliver ., actual:  * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat in a house studied a boy ., actual:  * cat ( 1 ) ; house ( 4 ) ; boy ( 7 ) ; nmod . in ( 1 , 4 ) AND study ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * cat ( 1 ) ; house ( 4 ) ; boy ( 7 ) ; nmod . in ( 1 , 4 ) AND study ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The teacher on the table gave Liam a cake on the tripod ., actual:  * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The deer in a house hunted a melon ., actual:  * deer ( 1 ) ; house ( 4 ) ; melon ( 7 ) ; nmod . in ( 1 , 4 ) AND hunt ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * deer ( 1 ) ; house ( 4 ) ; melon ( 7 ) ; nmod . in ( 1 , 4 ) AND hunt ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy beside a cabinet danced ., actual:  * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 ), expected: * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )
mismatched part: nmod . in ( 11 , 14 ) (expected) != nmod . in ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the stage found the banana in a bucket ., actual:  * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The fish beside the seat offered the hamburger beside a key to a frog ., actual:  * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog on the table gave a cake beside the bottle to James ., actual:  * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 ), expected: * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy on a plate sketched a chicken ., actual:  boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy in the house lended the mouse the cake beside a seat ., actual:  boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on the stage ate the boy on a seat ., actual:  * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . beside ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog in a garden ate ., actual:  * dog ( 1 ) ; garden ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ), expected: * dog ( 1 ) ; garden ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bird on a train liked a cake beside a box ., actual:  bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog beside the table cried ., actual:  * dog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: * dog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on a boat gave the box on a table to a boy ., actual:  * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a room sent a frog a cake beside the pillar ., actual:  girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl on a tree offered the boy the banana beside a table ., actual:  * girl ( 1 ) ; tree ( 4 ) ; * boy ( 7 ) ; * banana ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: * girl ( 1 ) ; tree ( 4 ) ; * boy ( 7 ) ; * banana ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The prince in a bin smiled ., actual:  * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A cat in a bag found a book in the well ., actual:  cat ( 1 ) ; bag ( 4 ) ; book ( 7 ) ; * well ( 10 ) ; nmod . in ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: cat ( 1 ) ; bag ( 4 ) ; book ( 7 ) ; * well ( 10 ) ; nmod . in ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside a stage lended the cake in the house to Liam ., actual:  * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 ), expected: * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
example agent error substitute nmod instead - input: The girl beside the tree in the bookstore slept ., actual:  * girl ( 1 ) ; * tree ( 4 ) ; * bookstore ( 7 ) ; nmod . beside ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 7 ), expected: * girl ( 1 ) ; * tree ( 4 ) ; * bookstore ( 7 ) ; nmod . beside ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != see ( 8 ) (actual)
mismatched part: see ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bear on the seat discovered a boy beside a stage ., actual:  bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A cat on the bed decomposed the cake in the cylinder ., actual:  cat ( 1 ) ; * bed ( 4 ) ; * cake ( 7 ) ; * cylinder ( 10 ) ; nmod . on ( 1 , 4 ) AND decompose ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: cat ( 1 ) ; * bed ( 4 ) ; * cake ( 7 ) ; * cylinder ( 10 ) ; nmod . on ( 1 , 4 ) AND decompose ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside a boat drew a soap ., actual:  girl ( 1 ) ; boat ( 4 ) ; soap ( 7 ) ; nmod . beside ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: girl ( 1 ) ; boat ( 4 ) ; soap ( 7 ) ; nmod . beside ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The turkey in the storage held a cake beside a table ., actual:  * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in a box liked the donut beside a stage ., actual:  * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != room ( 4 ) (actual)
mismatched part: room ( 4 ) (expected) != smile ( 8 ) (actual)
mismatched part: smile ( 8 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . in ( 11 , 14 ) (expected) != nmod . in ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The chicken on the table poked the child in a cup ., actual:  * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child beside the table rolled the student in the tin ., actual:  child ( 1 ) ; * table ( 4 ) ; * student ( 7 ) ; * tin ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: child ( 1 ) ; * table ( 4 ) ; * student ( 7 ) ; * tin ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a envelope sold Liam the cake beside the computer ., actual:  girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside the table gave a mouse a mirror in the saucepan ., actual:  girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 ), expected: girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A dog on the stage snored ., actual:  dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 ), expected: dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A dog in the wardrobe smiled ., actual:  dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the table ate the ball in a cafe ., actual:  girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy beside the seat drew ., actual:  boy ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ), expected: boy ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child on a table gave Scarlett a balloon beside a lemon ., actual:  child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy on the stage observed the donut ., actual:  boy ( 1 ) ; * stage ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND observe ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: boy ( 1 ) ; * stage ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND observe ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy on the stage nursed a cookie ., actual:  boy ( 1 ) ; * stage ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND nurse ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: boy ( 1 ) ; * stage ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND nurse ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bunny on the tree drew ., actual:  bunny ( 1 ) ; * tree ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ), expected: bunny ( 1 ) ; * tree ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on a chair ate a jigsaw on the paper ., actual:  * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy beside a yacht cleaned ., actual:  * boy ( 1 ) ; yacht ( 4 ) ; nmod . beside ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ), expected: * boy ( 1 ) ; yacht ( 4 ) ; nmod . beside ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 )
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A deer beside the table gave Emma a sweetcorn in the garden ., actual:  deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 ), expected: deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the road cried ., actual:  * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy on a table called ., actual:  * boy ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 ), expected: * boy ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child beside a stage gave Emma a donut beside the house ., actual:  child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on a bible ate the donut ., actual:  * cat ( 1 ) ; bible ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * cat ( 1 ) ; bible ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A professor beside the bed smiled ., actual:  professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy on a table drew a baby ., actual:  boy ( 1 ) ; table ( 4 ) ; baby ( 7 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: boy ( 1 ) ; table ( 4 ) ; baby ( 7 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the canvas gave the glue beside a table to a girl ., actual:  * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside the table saw the cat in a car ., actual:  girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The teacher in a house awarded a cookie beside a seat to the bee ., actual:  * teacher ( 1 ) ; house ( 4 ) ; cookie ( 7 ) ; seat ( 10 ) ; * bee ( 13 ) ; nmod . in ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * teacher ( 1 ) ; house ( 4 ) ; cookie ( 7 ) ; seat ( 10 ) ; * bee ( 13 ) ; nmod . in ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the tabletop sold the princess a cake beside a monkey ., actual:  * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside a sword ate a fruit in the house ., actual:  girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A cat in the blender ate ., actual:  cat ( 1 ) ; * blender ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ), expected: cat ( 1 ) ; * blender ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: crumple ( 8 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: house ( 7 ) (expected) != crumple ( 8 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != house ( 7 ) (actual)
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy beside a bed gave Audrey a cake on the pedestal ., actual:  * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl on a table liked a journalist on a stage ., actual:  * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the room cried ., actual:  girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The mouse in the crate liked a professor on the road ., actual:  * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != sleep ( 8 ) (actual)
mismatched part: sleep ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the chair smiled ., actual:  * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in a house scoffed ., actual:  * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl on a tray served the cat a cake ., actual:  * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A doctor beside the stage grew a box beside the table ., actual:  doctor ( 1 ) ; * stage ( 4 ) ; box ( 7 ) ; * table ( 10 ) ; nmod . beside ( 1 , 4 ) AND grow ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: doctor ( 1 ) ; * stage ( 4 ) ; box ( 7 ) ; * table ( 10 ) ; nmod . beside ( 1 , 4 ) AND grow ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A puppy in the car juggled ., actual:  puppy ( 1 ) ; * car ( 4 ) ; nmod . in ( 1 , 4 ) AND juggle ( 5 ) AND agent ( 5 , 4 ), expected: puppy ( 1 ) ; * car ( 4 ) ; nmod . in ( 1 , 4 ) AND juggle ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: cot ( 7 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != cot ( 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the car liked a bottle in the house ., actual:  girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != road ( 4 ) (actual)
mismatched part: road ( 4 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a house sold the cake beside the stage to Emma ., actual:  girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 ), expected: girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The resident on a computer gave a cake beside a helicopter to the girl ., actual:  * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != poke ( 9 ) (actual)
mismatched part: poke ( 9 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in a glass served the boy a balloon ., actual:  * girl ( 1 ) ; glass ( 4 ) ; * boy ( 7 ) ; balloon ( 9 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * girl ( 1 ) ; glass ( 4 ) ; * boy ( 7 ) ; balloon ( 9 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the house gave the host a bat beside the pepper ., actual:  girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: house ( 7 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != house ( 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a container gave the brush in the cart to a duke ., actual:  girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on a table snored ., actual:  * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 ), expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the table rolled the cake beside the tree ., actual:  * girl ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * tree ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * tree ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the surface cried ., actual:  girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: freeze ( 8 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: girl ( 1 ) (expected) != freeze ( 8 ) (actual)
mismatched part: house ( 4 ) (expected) != girl ( 1 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != house ( 4 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside the table packed a cake ., actual:  girl ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND pack ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: girl ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND pack ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy on the table laughed ., actual:  * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 ), expected: * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy in a house froze the sailor in a can ., actual:  * boy ( 1 ) ; house ( 4 ) ; * sailor ( 7 ) ; can ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: * boy ( 1 ) ; house ( 4 ) ; * sailor ( 7 ) ; can ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside a table rented Camila the cake beside the bed ., actual:  * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A creature in the house drew ., actual:  creature ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ), expected: creature ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The consumer on the bed gave Evelyn a molecule beside the duck ., actual:  * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the panel drew ., actual:  girl ( 1 ) ; * panel ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * panel ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child on the bed poked a brush in the car ., actual:  child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child beside a chair ate the rose beside a shoe ., actual:  * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child on a table burned the pizza beside a stage ., actual:  * child ( 1 ) ; table ( 4 ) ; * pizza ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * child ( 1 ) ; table ( 4 ) ; * pizza ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . on ( 11 , 14 ) (expected) != nmod . on ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on a table scoffed ., actual:  * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 ), expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The chicken on a table rented the bean on the log to a girl ., actual:  * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy on a bed sent the cat a donut ., actual:  boy ( 1 ) ; bed ( 4 ) ; * cat ( 7 ) ; donut ( 9 ) ; nmod . on ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: boy ( 1 ) ; bed ( 4 ) ; * cat ( 7 ) ; donut ( 9 ) ; nmod . on ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A pony on a crack fed the guitar beside a broker to the sailor ., actual:  pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A teacher on the table cried ., actual:  teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A friend beside the table ate ., actual:  friend ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ), expected: friend ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the tin fed the cake beside a clock to Liam ., actual:  * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside a bed crumpled the goose in the basin ., actual:  * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy on the stage offered the girl a cookie ., actual:  * boy ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cookie ( 9 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * boy ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cookie ( 9 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl on the table collapsed the rose on the trampoline ., actual:  * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A baby in the car offered a cake on a bible to Charlotte ., actual:  baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside a stage cooked a cake in the shoe ., actual:  girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog on a stage packed ., actual:  * frog ( 1 ) ; stage ( 4 ) ; nmod . on ( 1 , 4 ) AND pack ( 5 ) AND agent ( 5 , 4 ), expected: * frog ( 1 ) ; stage ( 4 ) ; nmod . on ( 1 , 4 ) AND pack ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy on the plate jogged ., actual:  boy ( 1 ) ; * plate ( 4 ) ; nmod . on ( 1 , 4 ) AND jog ( 5 ) AND agent ( 5 , 4 ), expected: boy ( 1 ) ; * plate ( 4 ) ; nmod . on ( 1 , 4 ) AND jog ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy beside a broker lended Emma the melon on the plate ., actual:  boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A buyer beside the table rolled the cake in the backpack ., actual:  buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A fish on a leaflet loaned the cat the donut beside the stage ., actual:  fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != recipient ( 8 , 10 ) (actual)
mismatched part: recipient ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A priest on the box admired a cake on the table ., actual:  priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A chicken in a car served a cat a box in the bun ., actual:  chicken ( 1 ) ; car ( 4 ) ; cat ( 7 ) ; box ( 9 ) ; * bun ( 12 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 ), expected: chicken ( 1 ) ; car ( 4 ) ; cat ( 7 ) ; box ( 9 ) ; * bun ( 12 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The president beside a bed painted a cake ., actual:  * president ( 1 ) ; bed ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * president ( 1 ) ; bed ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the dog handed a cat the raisin on a table ., actual:  girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 ), expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy on a towel gave the frog the cake on a booklet ., actual:  * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 ), expected: * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat beside the stool gave a cake in a cup to a customer ., actual:  * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 ), expected: * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != stutter ( 8 ) (actual)
mismatched part: stutter ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: nmod . beside ( 11 , 14 ) (expected) != nmod . beside ( 8 , 14 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
defaultdict(<class 'int'>, {'multiple,agent=left,theme=None': 22, 'diff_length_skip,agent=left,theme=None': 65, 'agent=left,theme=None,part=agent': 65, 'multiple,agent=right or middle,theme=middle': 22, 'multiple,agent=left,theme=right': 119, 'diff_length_skip,agent=right or middle,theme=left': 74, 'agent=left,theme=middle,part=agent': 19, 'more_than_one_verb_not_v_inf_skip': 73, 'multiple,agent=right or middle,theme=None': 76, 'agent=right or middle,theme=None,part=recipient': 22, 'diff_length_skip,agent=right or middle,theme=None': 61, 'agent=left,theme=right,part=agent': 56, 'cp_skip': 77, 'multiple,agent=left,theme=middle': 29, 'multiple,agent=right or middle,theme=left': 3, 'agent=right or middle,theme=middle,part=recipient': 12, 'diff_length_skip,agent=left,theme=right': 2, 'agent=right or middle,theme=left,part=other': 3})
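The raw `defaultdict` repr above is hard to scan; the same category tallies can be re-entered as a plain dict and printed sorted by count (values copied verbatim from the output above; the helper itself is our addition, not part of the notebook's analysis):

```python
# Error-category counts copied from the defaultdict printed above,
# re-displayed in descending order for readability.
counts = {
    'multiple,agent=left,theme=right': 119,
    'cp_skip': 77,
    'multiple,agent=right or middle,theme=None': 76,
    'diff_length_skip,agent=right or middle,theme=left': 74,
    'more_than_one_verb_not_v_inf_skip': 73,
    'diff_length_skip,agent=left,theme=None': 65,
    'agent=left,theme=None,part=agent': 65,
    'diff_length_skip,agent=right or middle,theme=None': 61,
    'agent=left,theme=right,part=agent': 56,
    'multiple,agent=left,theme=middle': 29,
    'multiple,agent=left,theme=None': 22,
    'multiple,agent=right or middle,theme=middle': 22,
    'agent=right or middle,theme=None,part=recipient': 22,
    'agent=left,theme=middle,part=agent': 19,
    'agent=right or middle,theme=middle,part=recipient': 12,
    'multiple,agent=right or middle,theme=left': 3,
    'agent=right or middle,theme=left,part=other': 3,
    'diff_length_skip,agent=left,theme=right': 2,
}
for category, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{n:4d}  {category}")
```

The largest buckets involve an agent on the left of the verb with a theme on the right (the canonical subject-PP configuration targeted by the hypothesis), plus the skip categories excluded from the detailed breakdown.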
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != scream ( 8 ) (actual)
mismatched part: scream ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != sneeze ( 8 ) (actual)
mismatched part: sneeze ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . on ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The sailor in a house lended a biscuit on a table to a goose ., actual:  * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bear in the car froze the key on the table ., actual:  bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the bed lended the manager the leaf ., actual:  * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A baby on a truck slept ., actual:  baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a hole slept ., actual:  girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != nurse ( 8 ) (actual)
mismatched part: nurse ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child in a drawer gave Amelia a box beside the machine ., actual:  * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A driver beside the bed smiled ., actual:  driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A scientist on the desk admired the cake beside the chair ., actual:  scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A horse on the cake investigated the melon on a box ., actual:  horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The monster beside a road smiled ., actual:  * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the house liked a cake beside a bed ., actual:  * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != sneeze ( 5 ) (actual)
mismatched part: sneeze ( 5 ) (expected) != student ( 1 ) (actual)
mismatched part: student ( 1 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The champion beside a table liked a cake on the computer ., actual:  * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != table ( 7 ) (actual)
mismatched part: table ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy in the vase sent the cake on a table to a cat ., actual:  * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: house ( 4 ) (expected) != agent ( 9 , 7 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != house ( 4 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child on the pad ate the cat ., actual:  * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The wolf in the house offered the donut on the dish to Sophia ., actual:  * wolf ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * dish ( 10 ) ; Sophia ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: * wolf ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * dish ( 10 ) ; Sophia ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog on a mattress ate the radio on the bike ., actual:  * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != boat ( 4 ) (actual)
mismatched part: boat ( 4 ) (expected) != cook ( 5 ) (actual)
mismatched part: cook ( 5 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat in a house adored the donut on a stage ., actual:  * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != boy ( 1 ) (actual)
mismatched part: boy ( 1 ) (expected) != house ( 4 ) (actual)
mismatched part: house ( 4 ) (expected) != hunt ( 5 ) (actual)
mismatched part: hunt ( 5 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != sleep ( 8 ) (actual)
mismatched part: sleep ( 8 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside a table slept ., actual:  * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The horse on the stack loaned the lollipop on a table to Isaac ., actual:  * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the table awarded a cake on the stand to Oliver ., actual:  * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The teacher on the table gave Liam a cake on the tripod ., actual:  * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a rock smiled ., actual:  girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The fish beside the seat offered the hamburger beside a key to a frog ., actual:  * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog on the table gave a cake beside the bottle to James ., actual:  * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 ), expected: * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy in the house lended the mouse the cake beside a seat ., actual:  boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog in a house slept ., actual:  * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on the stage ate the boy on a seat ., actual:  * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bird on a train liked a cake beside a box ., actual:  bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != boy ( 1 ) (actual)
mismatched part: boy ( 1 ) (expected) != hunt ( 5 ) (actual)
mismatched part: hunt ( 5 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != room ( 4 ) (actual)
mismatched part: room ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on a boat gave the box on a table to a boy ., actual:  * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A dog in the house liked a cake ., actual:  dog ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: dog ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a room sent a frog a cake beside the pillar ., actual:  girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != computer ( 4 ) (actual)
mismatched part: computer ( 4 ) (expected) != draw ( 5 ) (actual)
mismatched part: draw ( 5 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != squirrel ( 1 ) (actual)
mismatched part: squirrel ( 1 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The horse on a bed slept ., actual:  * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The prince in a bin smiled ., actual:  * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside a stage lended the cake in the house to Liam ., actual:  * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 ), expected: * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != see ( 8 ) (actual)
mismatched part: see ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The turkey in the storage held a cake beside a table ., actual:  * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != cat ( 1 ) (actual)
mismatched part: cat ( 1 ) (expected) != hunt ( 5 ) (actual)
mismatched part: hunt ( 5 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != surface ( 4 ) (actual)
mismatched part: surface ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The teacher in the trap slept ., actual:  * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != room ( 4 ) (actual)
mismatched part: room ( 4 ) (expected) != smile ( 8 ) (actual)
mismatched part: smile ( 8 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The chicken on the table poked the child in a cup ., actual:  * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a envelope sold Liam the cake beside the computer ., actual:  girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The monkey on the futon gave the cat a pretzel ., actual:  * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A dog in the wardrobe smiled ., actual:  dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the table ate the ball in a cafe ., actual:  girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: block ( 1 ) (expected) != agent ( 9 , 7 ) (actual)
mismatched part: eat ( 9 ) (expected) != block ( 1 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != eat ( 9 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the taxi slept ., actual:  * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on a chair ate a jigsaw on the paper ., actual:  * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a table smiled ., actual:  girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A deer beside the table gave Emma a sweetcorn in the garden ., actual:  deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 ), expected: deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != cat ( 1 ) (actual)
mismatched part: cat ( 1 ) (expected) != house ( 4 ) (actual)
mismatched part: house ( 4 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != paint ( 5 ) (actual)
mismatched part: paint ( 5 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child beside a stage gave Emma a donut beside the house ., actual:  child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != car ( 4 ) (actual)
mismatched part: car ( 4 ) (expected) != hear ( 5 ) (actual)
mismatched part: hear ( 5 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != servant ( 1 ) (actual)
mismatched part: servant ( 1 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A professor beside the bed smiled ., actual:  professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the canvas gave the glue beside a table to a girl ., actual:  * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside the table saw the cat in a car ., actual:  girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The teacher in a house awarded a cookie beside a seat to the bee ., actual:  * teacher ( 1 ) ; house ( 4 ) ; cookie ( 7 ) ; seat ( 10 ) ; * bee ( 13 ) ; nmod . in ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * teacher ( 1 ) ; house ( 4 ) ; cookie ( 7 ) ; seat ( 10 ) ; * bee ( 13 ) ; nmod . in ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the tabletop sold the princess a cake beside a monkey ., actual:  * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside a sword ate a fruit in the house ., actual:  girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The citizen beside the duck adored the drink ., actual:  * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != basket ( 4 ) (actual)
mismatched part: basket ( 4 ) (expected) != cook ( 5 ) (actual)
mismatched part: cook ( 5 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: crumple ( 8 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: house ( 7 ) (expected) != crumple ( 8 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != house ( 7 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy beside a bed gave Audrey a cake on the pedestal ., actual:  * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The mouse in the crate liked a professor on the road ., actual:  * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != sleep ( 8 ) (actual)
mismatched part: sleep ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the chair smiled ., actual:  * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in a house scoffed ., actual:  * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != clean ( 5 ) (actual)
mismatched part: clean ( 5 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != stand ( 4 ) (actual)
mismatched part: stand ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: cot ( 7 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != cot ( 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the car liked a bottle in the house ., actual:  girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: recipient ( 5 , 6 ) (expected) != recipient ( 5 , 8 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != road ( 4 ) (actual)
mismatched part: road ( 4 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a house sold the cake beside the stage to Emma ., actual:  girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 ), expected: girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The resident on a computer gave a cake beside a helicopter to the girl ., actual:  * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the house gave the host a bat beside the pepper ., actual:  girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: house ( 7 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != house ( 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a container gave the brush in the cart to a duke ., actual:  girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: freeze ( 8 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: girl ( 1 ) (expected) != freeze ( 8 ) (actual)
mismatched part: house ( 4 ) (expected) != girl ( 1 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != house ( 4 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != bag ( 4 ) (actual)
mismatched part: bag ( 4 ) (expected) != frog ( 1 ) (actual)
mismatched part: frog ( 1 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != sketch ( 5 ) (actual)
mismatched part: sketch ( 5 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy on the table laughed ., actual:  * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 ), expected: * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The consumer on the bed gave Evelyn a molecule beside the duck ., actual:  * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 11 ) (actual)
mismatched part: theme ( 9 , 11 ) (expected) != theme ( 9 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy beside a chair laughed ., actual:  boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 ), expected: boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child on the bed poked a brush in the car ., actual:  child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child beside a chair ate the rose beside a shoe ., actual:  * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != sketch ( 5 ) (actual)
mismatched part: sketch ( 5 ) (expected) != surface ( 4 ) (actual)
mismatched part: surface ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on a table scoffed ., actual:  * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 ), expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The chicken on a table rented the bean on the log to a girl ., actual:  * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A pony on a crack fed the guitar beside a broker to the sailor ., actual:  pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A teacher beside a table danced ., actual:  teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 ), expected: teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the tin fed the cake beside a clock to Liam ., actual:  * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: recipient ( 5 , 7 ) (expected) != recipient ( 5 , 9 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child in a car smiled ., actual:  child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A baby in the car offered a cake on a bible to Charlotte ., actual:  baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The mouse on a table gave the donut in the nest to a cat ., actual:  * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 ), expected: * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy beside a broker lended Emma the melon on the plate ., actual:  boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: block ( 1 ) (expected) != agent ( 9 , 7 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != block ( 1 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A fish on a leaflet loaned the cat the donut beside the stage ., actual:  fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != recipient ( 8 , 10 ) (actual)
mismatched part: recipient ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != observe ( 5 ) (actual)
mismatched part: observe ( 5 ) (expected) != stool ( 4 ) (actual)
mismatched part: stool ( 4 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the dog handed a cat the raisin on a table ., actual:  girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 ), expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy on a towel gave the frog the cake on a booklet ., actual:  * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 ), expected: * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat beside the stool gave a cake in a cup to a customer ., actual:  * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 ), expected: * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != stutter ( 8 ) (actual)
mismatched part: stutter ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A host beside a table smiled ., actual:  host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the hanger rented the box to a child ., actual:  * cat ( 1 ) ; * hanger ( 4 ) ; * box ( 7 ) ; child ( 10 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 ), expected: * cat ( 1 ) ; * hanger ( 4 ) ; * box ( 7 ) ; child ( 10 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
Error category counts (defaultdict(int), sorted by count):
  multiple,agent=left,theme=right: 148
  diff_length_skip,agent=left,theme=None: 91
  cp_skip: 77
  more_than_one_verb_not_v_inf_skip: 73
  multiple,agent=right or middle,theme=None: 63
  diff_length_skip,agent=right or middle,theme=left: 56
  diff_length_skip,agent=right or middle,theme=None: 53
  agent=right or middle,theme=None,part=recipient: 49
  agent=left,theme=None,part=agent: 41
  multiple,agent=left,theme=None: 37
  agent=right or middle,theme=middle,part=recipient: 30
  multiple,agent=left,theme=middle: 27
  agent=left,theme=right,part=agent: 22
  agent=left,theme=middle,part=agent: 21
  multiple,agent=right or middle,theme=middle: 4
  multiple,agent=right or middle,theme=left: 3
  diff_length_skip,agent=left,theme=right: 3
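The category counts above come from labeling each mismatch by where the model pointed the role (e.g. the correct subject on the left vs. a nearer prepositional noun) and which role was affected. A minimal sketch of how such a tally could be accumulated (the record fields and key format here are assumptions for illustration, not the notebook's actual bookkeeping code):

```python
from collections import defaultdict

# Hypothetical mismatch records: where the model placed each role
# ('left' = the true subject position, 'right or middle' = a later noun,
# None = the sentence has no theme) and which part of the logical form
# was wrong.
mismatches = [
    {"agent": "left", "theme": None, "part": "agent"},
    {"agent": "left", "theme": "right", "part": "agent"},
    {"agent": "left", "theme": None, "part": "agent"},
]

counts = defaultdict(int)
for m in mismatches:
    # Key format mirrors the category strings in the counts above.
    key = f"agent={m['agent']},theme={m['theme']},part={m['part']}"
    counts[key] += 1

print(dict(counts))
# → {'agent=left,theme=None,part=agent': 2, 'agent=left,theme=right,part=agent': 1}
```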
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != scream ( 8 ) (actual)
mismatched part: scream ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The spokesman in the house served Emma the rose ., actual:  * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ), expected: * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
example agent error substitute nmod instead - input: A girl on the stool on the table drew a frog ., actual:  girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 ), expected: girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 13 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the house slept ., actual:  girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the bed lended the manager the leaf ., actual:  * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The baby beside a valve painted the cake ., actual:  * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a hole slept ., actual:  girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: nmod . beside ( 1 , 4 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A baby in a garden called the raisin ., actual:  baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child in a drawer gave Amelia a box beside the machine ., actual:  * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The professor beside a table appreciated the key in a room ., actual:  * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A cat on a bag cleaned a chemical in a house ., actual:  cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A frog beside the table cried ., actual:  frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside a rock passed Dylan a pen on a box ., actual:  girl ( 1 ) ; rock ( 4 ) ; Dylan ( 6 ) ; pen ( 8 ) ; box ( 11 ) ; nmod . beside ( 1 , 4 ) AND pass ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: girl ( 1 ) ; rock ( 4 ) ; Dylan ( 6 ) ; pen ( 8 ) ; box ( 11 ) ; nmod . beside ( 1 , 4 ) AND pass ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A driver beside the bed smiled ., actual:  driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A scientist on the desk admired the cake beside the chair ., actual:  scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bear beside a chair napped ., actual:  bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 4 ), expected: bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A horse on the cake investigated the melon on a box ., actual:  horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The monster beside a road smiled ., actual:  * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the table dusted the baby ., actual:  * girl ( 1 ) ; * table ( 4 ) ; * baby ( 7 ) ; nmod . beside ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * girl ( 1 ) ; * table ( 4 ) ; * baby ( 7 ) ; nmod . beside ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the house liked a cake beside a bed ., actual:  * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the house forwarded Victoria a gumball in the shoe ., actual:  girl ( 1 ) ; * house ( 4 ) ; Victoria ( 6 ) ; gumball ( 8 ) ; * shoe ( 11 ) ; nmod . in ( 1 , 4 ) AND forward ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 ), expected: girl ( 1 ) ; * house ( 4 ) ; Victoria ( 6 ) ; gumball ( 8 ) ; * shoe ( 11 ) ; nmod . in ( 1 , 4 ) AND forward ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy in the trailer poked the girl beside a table ., actual:  boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The champion beside a table liked a cake on the computer ., actual:  * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != table ( 7 ) (actual)
mismatched part: table ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy in the vase sent the cake on a table to a cat ., actual:  * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The puppy on the seat poked the boy ., actual:  * puppy ( 1 ) ; * seat ( 4 ) ; * boy ( 7 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * puppy ( 1 ) ; * seat ( 4 ) ; * boy ( 7 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: house ( 4 ) (expected) != agent ( 9 , 7 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != house ( 4 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child on the pad ate the cat ., actual:  * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A student in a pot liked the girl on a chair ., actual:  student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A cat on a sofa slept ., actual:  cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A baby on the chair saw the bear ., actual:  baby ( 1 ) ; * chair ( 4 ) ; * bear ( 7 ) ; nmod . on ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: baby ( 1 ) ; * chair ( 4 ) ; * bear ( 7 ) ; nmod . on ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog on a mattress ate the radio on the bike ., actual:  * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The penguin in the drawer rolled the donut beside the computer ., actual:  * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A deer beside the house slept ., actual:  deer ( 1 ) ; * house ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: deer ( 1 ) ; * house ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat in a house adored the donut on a stage ., actual:  * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
example agent error substitute nmod instead - input: The dog on the platter beside a stage slept ., actual:  * dog ( 1 ) ; * platter ( 4 ) ; stage ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 7 ), expected: * dog ( 1 ) ; * platter ( 4 ) ; stage ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside a table slept ., actual:  * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The horse on the stack loaned the lollipop on a table to Isaac ., actual:  * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the table awarded a cake on the stand to Oliver ., actual:  * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The teacher on the table gave Liam a cake on the tripod ., actual:  * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a rock smiled ., actual:  girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the stage found the banana in a bucket ., actual:  * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The fish beside the seat offered the hamburger beside a key to a frog ., actual:  * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 1 ) (expected) != theme ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog on the table gave a cake beside the bottle to James ., actual:  * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 ), expected: * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: theme ( 6 , 1 ) (expected) != theme ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy on a plate sketched a chicken ., actual:  boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy in the house lended the mouse the cake beside a seat ., actual:  boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
example agent error substitute nmod instead - input: The director on a bed on the machine lended a farmer the sandwich ., actual:  * director ( 1 ) ; bed ( 4 ) ; * machine ( 7 ) ; farmer ( 10 ) ; * sandwich ( 12 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND lend ( 8 ) AND agent ( 8 , 7 ) AND recipient ( 8 , 10 ) AND theme ( 8 , 12 ), expected: * director ( 1 ) ; bed ( 4 ) ; * machine ( 7 ) ; farmer ( 10 ) ; * sandwich ( 12 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND lend ( 8 ) AND agent ( 8 , 1 ) AND recipient ( 8 , 10 ) AND theme ( 8 , 12 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The frog in a house slept ., actual:  * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on the stage ate the boy on a seat ., actual:  * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . in ( 7 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The baby in the house promised the donut to the cat ., actual:  * baby ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * cat ( 10 ) ; nmod . in ( 1 , 4 ) AND promise ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 ), expected: * baby ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * cat ( 10 ) ; nmod . in ( 1 , 4 ) AND promise ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bird on a train liked a cake beside a box ., actual:  bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl on a booklet walked ., actual:  * girl ( 1 ) ; booklet ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; booklet ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on a boat gave the box on a table to a boy ., actual:  * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a room sent a frog a cake beside the pillar ., actual:  girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl on a tree offered the boy the banana beside a table ., actual:  * girl ( 1 ) ; tree ( 4 ) ; * boy ( 7 ) ; * banana ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: * girl ( 1 ) ; tree ( 4 ) ; * boy ( 7 ) ; * banana ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The horse on a bed slept ., actual:  * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The prince in a bin smiled ., actual:  * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside a stage lended the cake in the house to Liam ., actual:  * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 ), expected: * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != see ( 8 ) (actual)
mismatched part: see ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A bear on the seat discovered a boy beside a stage ., actual:  bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The turkey in the storage held a cake beside a table ., actual:  * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in a box liked the donut beside a stage ., actual:  * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The teacher in the trap slept ., actual:  * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != room ( 4 ) (actual)
mismatched part: room ( 4 ) (expected) != smile ( 8 ) (actual)
mismatched part: smile ( 8 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != cake ( 7 ) (actual)
mismatched part: cake ( 7 ) (expected) != float ( 5 ) (actual)
mismatched part: float ( 5 ) (expected) != frog ( 1 ) (actual)
mismatched part: frog ( 1 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != nmod . in ( 7 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The chicken on the table poked the child in a cup ., actual:  * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
example agent error substitute nmod instead - input: A girl on the corpse in a glass admired a teacher ., actual:  girl ( 1 ) ; * corpse ( 4 ) ; glass ( 7 ) ; teacher ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND admire ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 ), expected: girl ( 1 ) ; * corpse ( 4 ) ; glass ( 7 ) ; teacher ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND admire ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a envelope sold Liam the cake beside the computer ., actual:  girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside the table gave a mouse a mirror in the saucepan ., actual:  girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 ), expected: girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The monkey on the futon gave the cat a pretzel ., actual:  * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy in the haystack slept ., actual:  boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A dog in the wardrobe smiled ., actual:  dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != game ( 7 ) (actual)
mismatched part: game ( 7 ) (expected) != girl ( 1 ) (actual)
mismatched part: girl ( 1 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != nmod . on ( 7 , 10 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the table ate the ball in a cafe ., actual:  girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the taxi slept ., actual:  * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child on a table gave Scarlett a balloon beside a lemon ., actual:  child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The dog on a chair ate a jigsaw on the paper ., actual:  * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on a table smiled ., actual:  girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A deer beside the table gave Emma a sweetcorn in the garden ., actual:  deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 ), expected: deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A tiger on a bible slept ., actual:  tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the road cried ., actual:  * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child beside a stage gave Emma a donut beside the house ., actual:  child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A professor beside the bed smiled ., actual:  professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the canvas gave the glue beside a table to a girl ., actual:  * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside the table saw the cat in a car ., actual:  girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the tabletop sold the princess a cake beside a monkey ., actual:  * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside a sword ate a fruit in the house ., actual:  girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The citizen beside the duck adored the drink ., actual:  * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: crumple ( 8 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: house ( 7 ) (expected) != crumple ( 8 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != house ( 7 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the rock walked ., actual:  girl ( 1 ) ; * rock ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * rock ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
example agent error substitute nmod instead - input: The girl in the house beside a cage dusted a ball ., actual:  * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 ), expected: * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy beside a bed gave Audrey a cake on the pedestal ., actual:  * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl on a table liked a journalist on a stage ., actual:  * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the room cried ., actual:  girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The mouse in the crate liked a professor on the road ., actual:  * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
example agent error substitute nmod instead - input: A creature in a house beside the book slept ., actual:  creature ( 1 ) ; house ( 4 ) ; * book ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 7 ), expected: creature ( 1 ) ; house ( 4 ) ; * book ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside the chair smiled ., actual:  * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl on a tray served the cat a cake ., actual:  * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: cot ( 7 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != cot ( 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the car liked a bottle in the house ., actual:  girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 8 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != road ( 4 ) (actual)
mismatched part: road ( 4 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a house sold the cake beside the stage to Emma ., actual:  girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 ), expected: girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The resident on a computer gave a cake beside a helicopter to the girl ., actual:  * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the swamp painted the glue ., actual:  girl ( 1 ) ; * swamp ( 4 ) ; * glue ( 7 ) ; nmod . in ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ), expected: girl ( 1 ) ; * swamp ( 4 ) ; * glue ( 7 ) ; nmod . in ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in a glass served the boy a balloon ., actual:  * girl ( 1 ) ; glass ( 4 ) ; * boy ( 7 ) ; balloon ( 9 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * girl ( 1 ) ; glass ( 4 ) ; * boy ( 7 ) ; balloon ( 9 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in the house gave the host a bat beside the pepper ., actual:  girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: house ( 7 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != house ( 7 ) (actual)
mismatched part: nmod . in ( 4 , 7 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl in a container gave the brush in the cart to a duke ., actual:  girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: nmod . on ( 1 , 4 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . on ( 1 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the surface cried ., actual:  girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: freeze ( 8 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: girl ( 1 ) (expected) != freeze ( 8 ) (actual)
mismatched part: house ( 4 ) (expected) != girl ( 1 ) (actual)
mismatched part: nmod . in ( 1 , 4 ) (expected) != house ( 4 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != nmod . in ( 1 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != bunny ( 1 ) (actual)
mismatched part: bunny ( 1 ) (expected) != cake ( 7 ) (actual)
mismatched part: cake ( 7 ) (expected) != chair ( 4 ) (actual)
mismatched part: chair ( 4 ) (expected) != loan ( 5 ) (actual)
mismatched part: loan ( 5 ) (expected) != nmod . beside ( 1 , 4 ) (actual)
mismatched part: nmod . beside ( 1 , 4 ) (expected) != nmod . in ( 7 , 9 ) (actual)
mismatched part: recipient ( 5 , 9 ) (expected) != recipient ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy beside the whale slept ., actual:  * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy on the table laughed ., actual:  * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 ), expected: * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside a table rented Camila the cake beside the bed ., actual:  * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The consumer on the bed gave Evelyn a molecule beside the duck ., actual:  * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 ), expected: * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The lion beside a piano gave the girl the donut ., actual:  * lion ( 1 ) ; piano ( 4 ) ; * girl ( 7 ) ; * donut ( 9 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * lion ( 1 ) ; piano ( 4 ) ; * girl ( 7 ) ; * donut ( 9 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child on the bed poked a brush in the car ., actual:  child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the cart drew Emma ., actual:  * girl ( 1 ) ; * cart ( 4 ) ; Emma ( 6 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 6 ), expected: * girl ( 1 ) ; * cart ( 4 ) ; Emma ( 6 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 6 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child beside a chair ate the rose beside a shoe ., actual:  * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 ), expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The chicken on a table rented the bean on the log to a girl ., actual:  * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 ), expected: * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A pony on a crack fed the guitar beside a broker to the sailor ., actual:  pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 ), expected: pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A teacher on the table cried ., actual:  teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 ), expected: teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A teacher beside a table danced ., actual:  teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 ), expected: teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 9 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the tin fed the cake beside a clock to Liam ., actual:  * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 ), expected: * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The kid on a trampoline slept ., actual:  * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl beside a bed crumpled the goose in the basin ., actual:  * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy on the stage offered the girl a cookie ., actual:  * boy ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cookie ( 9 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * boy ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cookie ( 9 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A child in a car smiled ., actual:  child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl in the tub lended Emma the cake ., actual:  * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ), expected: * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != cake ( 7 ) (actual)
mismatched part: cake ( 7 ) (expected) != driver ( 10 ) (actual)
mismatched part: driver ( 10 ) (expected) != nmod . in ( 7 , 10 ) (actual)
mismatched part: recipient ( 5 , 10 ) (expected) != recipient ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The girl on the table collapsed the rose on the trampoline ., actual:  * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A baby in the car offered a cake on a bible to Charlotte ., actual:  baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 ), expected: baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl beside a stage cooked a cake in the shoe ., actual:  girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 ), expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
example agent error substitute nmod instead - input: The lion beside a stage beside the bench gave a girl the pillow ., actual:  * lion ( 1 ) ; stage ( 4 ) ; * bench ( 7 ) ; girl ( 10 ) ; * pillow ( 12 ) ; nmod . beside ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND give ( 8 ) AND agent ( 8 , 7 ) AND recipient ( 8 , 10 ) AND theme ( 8 , 12 ), expected: * lion ( 1 ) ; stage ( 4 ) ; * bench ( 7 ) ; girl ( 10 ) ; * pillow ( 12 ) ; nmod . beside ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND give ( 8 ) AND agent ( 8 , 1 ) AND recipient ( 8 , 10 ) AND theme ( 8 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The mouse on a table gave the donut in the nest to a cat ., actual:  * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 ), expected: * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the chair slept ., actual:  girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A boy beside a broker lended Emma the melon on the plate ., actual:  boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 ), expected: boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A fish on a leaflet loaned the cat the donut beside the stage ., actual:  fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 ), expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A priest on the box admired a cake on the table ., actual:  priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 ), expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The child beside the chair slept ., actual:  * child ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: * child ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 10 ) (actual)
mismatched part: theme ( 8 , 10 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 8 ) (actual)
mismatched part: theme ( 6 , 8 ) (expected) != theme ( 6 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A chicken in a car served a cat a box in the bun ., actual:  chicken ( 1 ) ; car ( 4 ) ; cat ( 7 ) ; box ( 9 ) ; * bun ( 12 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 ), expected: chicken ( 1 ) ; car ( 4 ) ; cat ( 7 ) ; box ( 9 ) ; * bun ( 12 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 7 ) (actual)
mismatched part: theme ( 5 , 7 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A girl on the dog handed a cat the raisin on a table ., actual:  girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 ), expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy on a towel gave the frog the cake on a booklet ., actual:  * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 ), expected: * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat beside the stool gave a cake in a cup to a customer ., actual:  * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 ), expected: * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A cow in the puddle slept ., actual:  cow ( 1 ) ; * puddle ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 ), expected: cow ( 1 ) ; * puddle ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 7 ) (actual)
mismatched part: nmod . on ( 4 , 7 ) (expected) != stage ( 7 ) (actual)
mismatched part: stage ( 7 ) (expected) != stutter ( 8 ) (actual)
mismatched part: stutter ( 8 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A director in a house walked ., actual:  director ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 ), expected: director ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: recipient ( 9 , 1 ) (expected) != recipient ( 9 , 7 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: A host beside a table smiled ., actual:  host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 ), expected: host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The cat on the hanger rented the box to a child ., actual:  * cat ( 1 ) ; * hanger ( 4 ) ; * box ( 7 ) ; child ( 10 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 ), expected: * cat ( 1 ) ; * hanger ( 4 ) ; * box ( 7 ) ; child ( 10 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 )
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 6 ) (actual)
mismatched part: theme ( 5 , 6 ) (expected) != theme ( 5 , 1 ) (actual)
mismatched part: agent ( 8 , 1 ) (expected) != agent ( 8 , 9 ) (actual)
mismatched part: theme ( 8 , 9 ) (expected) != theme ( 8 , 1 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The boy beside a chair danced ., actual:  * boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 ), expected: * boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )
mismatched part: recipient ( 6 , 1 ) (expected) != recipient ( 6 , 4 ) (actual)
mismatched part: agent ( 5 , 1 ) (expected) != agent ( 5 , 4 ) (actual)
example agent error substitute nmod instead - input: The baby on the stage gave the girl a cake ., actual:  * baby ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ), expected: * baby ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
defaultdict(<class 'int'>, {'multiple,agent=left,theme=None': 17, 'agent=left,theme=None,part=agent': 73, 'agent=right or middle,theme=middle,part=recipient': 34, 'diff_length_skip,agent=left,theme=None': 91, 'agent=left,theme=right,part=agent': 44, 'diff_length_skip,agent=right or middle,theme=None': 74, 'diff_length_skip,agent=right or middle,theme=left': 83, 'multiple,agent=left,theme=middle': 29, 'multiple,agent=left,theme=right': 121, 'more_than_one_verb_not_v_inf_skip': 73, 'multiple,agent=right or middle,theme=None': 20, 'agent=right or middle,theme=None,part=recipient': 68, 'diff_length_skip,agent=left,theme=right': 10, 'cp_skip': 77, 'agent=left,theme=middle,part=agent': 19, 'multiple,agent=right or middle,theme=left': 1, 'agent=right or middle,theme=left,part=theme': 2, 'agent=right or middle,theme=None,part=theme': 1})
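The "mismatched part" lines above come from aligning the actual and expected logical forms part by part and reporting where they disagree. A minimal sketch of that single-point comparison, assuming parts are delimited by ";" and "AND" as in the ReCOGS output format (`lf_parts` and `single_point_mismatch` are illustrative names, not the notebook's actual helpers):

```python
import re

def lf_parts(lf):
    # Split a ReCOGS logical form into atomic parts,
    # treating both ";" and the token "AND" as delimiters.
    return [p.strip() for p in re.split(r';|\bAND\b', lf) if p.strip()]

def single_point_mismatch(actual_lf, expected_lf):
    # Return (expected_part, actual_part) if exactly one aligned part
    # differs; None otherwise (different-length outputs are skipped,
    # as in the diff_length_skip counts above).
    actual, expected = lf_parts(actual_lf), lf_parts(expected_lf)
    if len(actual) != len(expected):
        return None
    diffs = [(e, a) for e, a in zip(expected, actual) if e != a]
    return diffs[0] if len(diffs) == 1 else None
```

On the "A cow in the puddle slept ." example above, this yields the reported pair agent ( 5 , 1 ) (expected) vs agent ( 5 , 4 ) (actual).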
In [ ]:
def get_clopper_pearson_confidence_interval(n, k):
  alpha = 0.05
  from scipy.stats import beta
  # Reference: https://en.wikipedia.org/w/index.php?title=Binomial_proportion_confidence_interval&oldid=1252517214#Clopper%E2%80%93Pearson_interval
  # Wikipedia's underlying reference for the beta distribution form, https://arxiv.org/abs/1303.1288 equation 4, is also useful.
  cp_confidence_interval = beta.ppf([alpha/2.0, 1-alpha/2.0], [k, k+1],[n-k + 1, n-k])
  # Below https://arxiv.org/abs/1303.1288 eqn 4 they discuss the n == k and k == 0 cases,
  # which justify the assignments below and the use of alpha/2.0 (two-tailed adjustment) above even when k == n or k == 0.
  # They give a closed form for these special cases, but one can check it matches what beta.ppf (which covers all cases) returns there as well.
  if n == k:
    cp_confidence_interval[1] = 1.0
  if k == 0:
    cp_confidence_interval[0] = 0.0
  return cp_confidence_interval
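As a stdlib-only cross-check of the function above (not part of the notebook; `clopper_pearson_bisect` is an illustrative name), the same bounds can be recovered by bisecting the binomial tail probabilities directly, which also makes the k == 0 and k == n special cases explicit:

```python
import math

def binom_tail_ge(n, k, p):
    # Upper tail P[X >= k] for X ~ Binomial(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def clopper_pearson_bisect(n, k, alpha=0.05):
    # Clopper-Pearson (1 - alpha) CI via bisection on the binomial tails:
    #   lower solves P[X >= k | p] = alpha/2
    #   upper solves P[X <= k | p] = alpha/2, i.e. P[X >= k+1 | p] = 1 - alpha/2
    def solve(f):
        lo, hi = 0.0, 1.0  # f(lo) is False, f(hi) is True; bisect to the boundary
        for _ in range(60):
            mid = (lo + hi) / 2
            if f(mid):
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else solve(lambda p: binom_tail_ge(n, k, p) > alpha / 2)
    upper = 1.0 if k == n else solve(lambda p: binom_tail_ge(n, k + 1, p) > 1 - alpha / 2)
    return lower, upper
```

For the 765-out-of-767 result reported in the summary below, this gives approximately (0.9906, 0.9997), matching the beta-quantile interval.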

Results for single-point logical form errors when the agent is left of the verb, single verb (v_inf excl, can be incl), no complement phrases (n=767 out of 8077 total errors; 9.5% of raw errors, 15.6% of the 4907 agent-is-left-of-the-verb errors; restricted to simplify analysis) (ANALYSIS IN PROGRESS)¶

Note that our hypothesis does make similar predictions for the theme left of the verb in this obj_pp_to_subj_pp split (the identical analysis with theme in place of agent), but we start by analyzing just the agent errors as they are more common.

agent-left-of-verb sentences (regardless of whether single verb or not)

In [ ]:
agent_left_of_verb_errors_total
Out[ ]:
4907

single point errors in agent-left-of-verb, single-verb sentences:

In [ ]:
total_agent_left_single_point_error_count
Out[ ]:
767
In [ ]:
total_agent_left_single_point_error_in_agent_count
Out[ ]:
765
In [ ]:
total_agent_left_single_point_error_count / agent_left_of_verb_errors_total
Out[ ]:
0.15630731607907072

Summary numbers¶

In [ ]:
ci = get_clopper_pearson_confidence_interval(total_agent_left_single_point_error_count, total_agent_left_single_point_error_in_agent_count)
print(f"Across all n={len(fraction_in_expected_part_list)} Wu et al 2023 models, " +
 f"{total_agent_left_single_point_error_in_agent_count} out of {total_agent_left_single_point_error_count} " +
 f"({(total_agent_left_single_point_error_in_agent_count/total_agent_left_single_point_error_count)*100:0.2f}%; 95% confidence interval {ci[0]*100:0.2f} to {ci[1]*100:0.2f}%) " +
 f"single point errors in logical forms when the agent was on the left were in the agent part")
Across all n=10 Wu et al 2023 models, 765 out of 767 (99.74%; 95% confidence interval 99.06 to 99.97%) single point errors in logical forms when the agent was on the left were in the agent part
In [ ]:
print(f"On a per model basis, the fraction of agent-left single point errors where the agent was broken were {fraction_in_expected_part_list}\nAverage: {np.array(fraction_in_expected_part_list).mean()}")
On a per model basis, the fraction of agent-left single point errors where the agent was broken were [0.9850746268656716, 1.0, 1.0, 1.0, 1.0, 0.9896907216494846, 1.0, 1.0, 1.0, 1.0]
Average: 0.9974765348515156
In [ ]:
ci = get_clopper_pearson_confidence_interval(len(example_agent_left_single_point_mismatch_nmod_substitution_all) + len(example_agent_left_single_point_mismatch_not_nmod_substitution_all), len(example_agent_left_single_point_mismatch_nmod_substitution_all))

print(f"Across all n={len(fraction_in_expected_part_list)} Wu et al 2023 models, " +
 f"{len(example_agent_left_single_point_mismatch_nmod_substitution_all)} out of {len(example_agent_left_single_point_mismatch_nmod_substitution_all) + len(example_agent_left_single_point_mismatch_not_nmod_substitution_all)} " +
  f"({len(example_agent_left_single_point_mismatch_nmod_substitution_all)/(len(example_agent_left_single_point_mismatch_nmod_substitution_all) + len(example_agent_left_single_point_mismatch_not_nmod_substitution_all))*100:0.2f}%; 95% confidence interval {ci[0]*100:0.2f} to {ci[1]*100:0.2f}%) " +
  "of all single point errors in logical forms where the agent was on the left, modified by a prepositional phrase, and the error was in the agent part\nthe error was as predicted by our hypothesis (agent idx is mistakenly set to the prepositional noun index)\n\n" +
  "(see further down in notebook for a printout of all such examples for verifying manually if desired)"
 )
Across all n=10 Wu et al 2023 models, 740 out of 765 (96.73%; 95% confidence interval 95.21 to 97.87%) of all single point errors in logical forms where the agent was on the left, modified by a prepositional phrase, and the error was in the agent part
the error was as predicted by our hypothesis (agent idx is mistakenly set to the prepositional noun index)

(see further down in notebook for a printout of all such examples for verifying manually if desired)

On a per model basis, the fraction of these agent-left single point errors where the agent was broken that are broken in the predicted way (substitution of the prepositional noun index) averages 97% (notably, one outlier model had only 76% meeting the prediction):
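The predicted substitution itself can be written as a tiny transformation of the expected logical form. A sketch under the simplest assumption of a single prepositional phrase modifying the agent noun (`predicted_agent_error` is a hypothetical helper, not the notebook's code):

```python
import re

def predicted_agent_error(expected_lf):
    # Hypothesis: the flat pattern matcher assigns the agent role to the
    # prepositional noun (now the closest noun left of the verb) instead
    # of the true agent noun. Handles one PP on the agent only.
    m = re.search(r'agent \( (\d+) , (\d+) \)', expected_lf)
    if m is None:
        return expected_lf
    verb_idx, agent_idx = m.group(1), m.group(2)
    nmod = re.search(r'nmod \. \w+ \( ' + agent_idx + r' , (\d+) \)', expected_lf)
    if nmod is None:
        return expected_lf  # agent not modified by a PP; no error predicted
    prep_noun_idx = nmod.group(1)
    return expected_lf.replace(f'agent ( {verb_idx} , {agent_idx} )',
                               f'agent ( {verb_idx} , {prep_noun_idx} )')
```

For "A girl in the house knew a cake ." this maps the expected agent ( 5 , 1 ) to the observed erroneous agent ( 5 , 4 ); chained PPs (e.g. "on the stool on the table") would instead require following the nmod chain to the noun nearest the verb.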

In [ ]:
import math
d = np.array(fraction_agent_errors_where_it_was_nmod_substitution_list)
print(f"Wu et al 2023 baseline Encoder-Decoder Transformer model\n" +
      f"average % of all agent-left sentences with single point error with agent modified by prepositional phrase\n" +
      f"where agent idx in LF is substituted by prepositional noun idx (as predicted): {d.mean()*100:0.2f}%, stderr={d.std()/math.sqrt(len(d))*100:0.2f}% (n={len(d)})\n\nfraction as expected for each training+evaluation run (separate model): {fraction_agent_errors_where_it_was_nmod_substitution_list}")
Wu et al 2023 baseline Encoder-Decoder Transformer model
average % of all agent-left sentences with single point error with agent modified by prepositional phrase
where agent idx in LF is substituted by prepositional noun idx (as predicted): 97.07%, stderr=2.23% (n=10)

fraction as expected for each training+evaluation run (separate model): [0.9696969696969697, 0.7613636363636364, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.9761904761904762, 1.0]

Though that already shows that almost all of the single point errors in the logical form, when the agent is on the left and modified by a prepositional phrase, match our prediction, we can begin the deeper analysis with the simple subject-verb-object case where it is agent-verb-theme. There we also see very high agreement with our prediction (97.10%), but the numbers are fewer, so we can actually confirm every single example by eye if we want to:

In [ ]:
ci = get_clopper_pearson_confidence_interval(total_single_point_agent_errors_theme_right_count, total_single_point_agent_errors_theme_right_count_where_substitution_by_nmod)

print(f"{total_single_point_agent_errors_theme_right_count_where_substitution_by_nmod} out of {total_single_point_agent_errors_theme_right_count} single point errors " +
 f"({total_single_point_agent_errors_theme_right_count_where_substitution_by_nmod/total_single_point_agent_errors_theme_right_count*100:0.2f}%; 95% confidence interval {ci[0]*100:0.2f} to {ci[1]*100:0.2f}%) " +
 f"(combining all counts across n=10 models)\nwith nmod agent on left and theme on right are the predicted substitution of agent idx by the prepositional noun idx\n\n(see further down in notebook for a printout of all such examples ofr verifying manually if desired)")
234 out of 241 single point errors (97.10%; 95% confidence interval 94.11 to 98.82%) (combining all counts across n=10 models)
with nmod agent on left and theme on right are the predicted substitution of agent idx by the prepositional noun idx

(see further down in notebook for a printout of all such examples for verifying manually if desired)

If we look at the fraction for each separately trained-from-scratch Wu et al 2023 Encoder-Decoder Transformer, we see our hypothesis predicted the error successfully for all of the agent-left, theme-right errors for 8 out of 10 Transformers, and for the other 2 it predicted the vast majority; on average we predict 97.9% (average across models):

In [ ]:
import math
d = np.array(fraction_agent_errors_where_it_was_nmod_substitution_theme_right_list)
print(f"Wu et al 2023 baseline Encoder-Decoder Transformer model\n" +
      f"average % of agent-verb-theme sentence single point LF errors with agent modified by prepositional phrase\n" +
      f"where agent idx in LF is substituted by prepositional noun idx: {d.mean()*100:0.2f}%, stderr={d.std()/math.sqrt(len(d))*100:0.2f}% (mean +/- std, n={len(d)})\n\n fraction as predicted for each training+evaluation run (separate models): {fraction_agent_errors_where_it_was_nmod_substitution_theme_right_list}")
Wu et al 2023 baseline Encoder-Decoder Transformer model
average % of agent-verb-theme sentence single point LF errors with agent modified by prepositional phrase
where agent idx in LF is substituted by prepositional noun idx: 97.85%, stderr=1.63% (mean +/- stderr, n=10)

fraction as predicted for each training+evaluation run (separate models): [0.9565217391304348, 0.8285714285714286, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

Here are the actual sentences with the model's actual and the expected logical forms:

Example sentences for agent-left, theme-right single point mismatches (smaller set for verifying all by hand if desired)¶

Note that sentences can occur more than once and can appear in both the "As predicted" and "Not as predicted" conditions, because this is the combined output from n=10 separate Transformers with different behavior. A sentence does not necessarily appear 10 times, because we only consider logical form outputs with single point errors, and runs where a model gets the sentence correct are not included in the analysis.

This is an analysis of the errors Wu et al 2023's baseline Encoder-Decoder Transformer makes on the obj_pp_to_subj_pp split from https://github.com/frankaging/ReCOGS/blob/1b6eca8ff4dca5fd2fb284a7d470998af5083beb/recogs_positional_index/gen.tsv (public data; it was held out for evaluations of our RASP model, and this analysis of the Wu et al baseline errors on it was only done after the RASP evaluation on that split was completed).

Wu et al 2023 baseline model error AS predicted (234 out of 241; 97.1%)¶

In [ ]:
len(example_agent_left_theme_right_single_point_mismatch_nmod_substitution_all)
Out[ ]:
234
In [ ]:
for example in example_agent_left_theme_right_single_point_mismatch_nmod_substitution_all:
  print(example)
input: A girl on the stool on the table drew a frog .
actual:   girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: The professor beside a table appreciated the key in a room .
actual:   * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The cat in a house adored the donut on a stage .
actual:   * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A bear on the seat discovered a boy beside a stage .
actual:   bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl on a table liked a journalist on a stage .
actual:   * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A child on the bed poked a brush in the car .
actual:   child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A bear in the car froze the key on the table .
actual:   bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A baby in a garden called the raisin .
actual:   baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A girl in the house knew a cake .
actual:   girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The professor beside a table appreciated the key in a room .
actual:   * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A cat on a bag cleaned a chemical in a house .
actual:   cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the table dusted the baby .
actual:   * girl ( 1 ) ; * table ( 4 ) ; * baby ( 7 ) ; nmod . beside ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * girl ( 1 ) ; * table ( 4 ) ; * baby ( 7 ) ; nmod . beside ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The cat in a house adored the donut on a stage .
actual:   * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the stage found the banana in a bucket .
actual:   * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A cat on the bed decomposed the cake in the cylinder .
actual:   cat ( 1 ) ; * bed ( 4 ) ; * cake ( 7 ) ; * cylinder ( 10 ) ; nmod . on ( 1 , 4 ) AND decompose ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; * bed ( 4 ) ; * cake ( 7 ) ; * cylinder ( 10 ) ; nmod . on ( 1 , 4 ) AND decompose ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl beside the table saw the cat in a car .
actual:   girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The citizen beside the duck adored the drink .
actual:   * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The girl in the house beside a cage dusted a ball .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A child on the bed poked a brush in the car .
actual:   child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl beside a stage cooked a cake in the shoe .
actual:   girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A buyer beside the table rolled the cake in the backpack .
actual:   buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The baby beside a valve painted the cake .
actual:   * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A baby in a garden called the raisin .
actual:   baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A boy in the trailer poked the girl beside a table .
actual:   boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The penguin in the drawer rolled the donut beside the computer .
actual:   * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl beside the stage found the banana in a bucket .
actual:   * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl on a table liked a journalist on a stage .
actual:   * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl in the swamp painted the glue .
actual:   girl ( 1 ) ; * swamp ( 4 ) ; * glue ( 7 ) ; nmod . in ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; * swamp ( 4 ) ; * glue ( 7 ) ; nmod . in ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl beside a stage cooked a cake in the shoe .
actual:   girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A buyer beside the table rolled the cake in the backpack .
actual:   buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl on the stool on the table drew a frog .
actual:   girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl in the house beside a cage dusted a ball .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A bear in the car froze the key on the table .
actual:   bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A cat on a bag cleaned a chemical in a house .
actual:   cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A boy in the trailer poked the girl beside a table .
actual:   boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The penguin in the drawer rolled the donut beside the computer .
actual:   * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl beside the stage found the banana in a bucket .
actual:   * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A bear on the seat discovered a boy beside a stage .
actual:   bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl beside the table saw the cat in a car .
actual:   girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl beside a stage cooked a cake in the shoe .
actual:   girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A buyer beside the table rolled the cake in the backpack .
actual:   buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A bear in the car froze the key on the table .
actual:   bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The baby beside a valve painted the cake .
actual:   * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A baby in a garden called the raisin .
actual:   baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A girl in the house knew a cake .
actual:   girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A cat on a bag cleaned a chemical in a house .
actual:   cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The researcher in a room ate the baby .
actual:   * researcher ( 1 ) ; room ( 4 ) ; * baby ( 7 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * researcher ( 1 ) ; room ( 4 ) ; * baby ( 7 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A boy in the trailer poked the girl beside a table .
actual:   boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The child on the pad ate the cat .
actual:   * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A teacher beside the table burned the producer on the road .
actual:   teacher ( 1 ) ; * table ( 4 ) ; * producer ( 7 ) ; * road ( 10 ) ; nmod . beside ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: teacher ( 1 ) ; * table ( 4 ) ; * producer ( 7 ) ; * road ( 10 ) ; nmod . beside ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The penguin in the drawer rolled the donut beside the computer .
actual:   * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The frog on a cot dusted a cookie .
actual:   * frog ( 1 ) ; cot ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * frog ( 1 ) ; cot ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The cat in a house adored the donut on a stage .
actual:   * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The cat in a house studied a boy .
actual:   * cat ( 1 ) ; house ( 4 ) ; boy ( 7 ) ; nmod . in ( 1 , 4 ) AND study ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * cat ( 1 ) ; house ( 4 ) ; boy ( 7 ) ; nmod . in ( 1 , 4 ) AND study ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The deer in a house hunted a melon .
actual:   * deer ( 1 ) ; house ( 4 ) ; melon ( 7 ) ; nmod . in ( 1 , 4 ) AND hunt ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * deer ( 1 ) ; house ( 4 ) ; melon ( 7 ) ; nmod . in ( 1 , 4 ) AND hunt ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The girl beside the stage found the banana in a bucket .
actual:   * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A boy on a plate sketched a chicken .
actual:   boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A cat in a bag found a book in the well .
actual:   cat ( 1 ) ; bag ( 4 ) ; book ( 7 ) ; * well ( 10 ) ; nmod . in ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; bag ( 4 ) ; book ( 7 ) ; * well ( 10 ) ; nmod . in ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A bear on the seat discovered a boy beside a stage .
actual:   bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A cat on the bed decomposed the cake in the cylinder .
actual:   cat ( 1 ) ; * bed ( 4 ) ; * cake ( 7 ) ; * cylinder ( 10 ) ; nmod . on ( 1 , 4 ) AND decompose ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; * bed ( 4 ) ; * cake ( 7 ) ; * cylinder ( 10 ) ; nmod . on ( 1 , 4 ) AND decompose ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl beside a boat drew a soap .
actual:   girl ( 1 ) ; boat ( 4 ) ; soap ( 7 ) ; nmod . beside ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; boat ( 4 ) ; soap ( 7 ) ; nmod . beside ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A child beside the table rolled the student in the tin .
actual:   child ( 1 ) ; * table ( 4 ) ; * student ( 7 ) ; * tin ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * table ( 4 ) ; * student ( 7 ) ; * tin ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A boy on the stage observed the donut .
actual:   boy ( 1 ) ; * stage ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND observe ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: boy ( 1 ) ; * stage ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND observe ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A boy on the stage nursed a cookie .
actual:   boy ( 1 ) ; * stage ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND nurse ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: boy ( 1 ) ; * stage ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND nurse ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The cat on a bible ate the donut .
actual:   * cat ( 1 ) ; bible ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * cat ( 1 ) ; bible ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A boy on a table drew a baby .
actual:   boy ( 1 ) ; table ( 4 ) ; baby ( 7 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: boy ( 1 ) ; table ( 4 ) ; baby ( 7 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A girl beside the table saw the cat in a car .
actual:   girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl on a table liked a journalist on a stage .
actual:   * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A doctor beside the stage grew a box beside the table .
actual:   doctor ( 1 ) ; * stage ( 4 ) ; box ( 7 ) ; * table ( 10 ) ; nmod . beside ( 1 , 4 ) AND grow ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: doctor ( 1 ) ; * stage ( 4 ) ; box ( 7 ) ; * table ( 10 ) ; nmod . beside ( 1 , 4 ) AND grow ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl beside the table rolled the cake beside the tree .
actual:   * girl ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * tree ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * tree ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl beside the table packed a cake .
actual:   girl ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND pack ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND pack ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The boy in a house froze the sailor in a can .
actual:   * boy ( 1 ) ; house ( 4 ) ; * sailor ( 7 ) ; can ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * boy ( 1 ) ; house ( 4 ) ; * sailor ( 7 ) ; can ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A child on the bed poked a brush in the car .
actual:   child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The child on a table burned the pizza beside a stage .
actual:   * child ( 1 ) ; table ( 4 ) ; * pizza ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; table ( 4 ) ; * pizza ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl beside a bed crumpled the goose in the basin .
actual:   * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl on the table collapsed the rose on the trampoline .
actual:   * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl beside a stage cooked a cake in the shoe .
actual:   girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A buyer beside the table rolled the cake in the backpack .
actual:   buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The president beside a bed painted a cake .
actual:   * president ( 1 ) ; bed ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * president ( 1 ) ; bed ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A bear in the car froze the key on the table .
actual:   bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The child on the pad ate the cat .
actual:   * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The cat in a house adored the donut on a stage .
actual:   * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A dog in the house liked a cake .
actual:   dog ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: dog ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl beside the table saw the cat in a car .
actual:   girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The citizen beside the duck adored the drink .
actual:   * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A child on the bed poked a brush in the car .
actual:   child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl on the stool on the table drew a frog .
actual:   girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: The baby beside a valve painted the cake .
actual:   * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A baby in a garden called the raisin .
actual:   baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The professor beside a table appreciated the key in a room .
actual:   * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A cat on a bag cleaned a chemical in a house .
actual:   cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the table dusted the baby .
actual:   * girl ( 1 ) ; * table ( 4 ) ; * baby ( 7 ) ; nmod . beside ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * girl ( 1 ) ; * table ( 4 ) ; * baby ( 7 ) ; nmod . beside ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A boy in the trailer poked the girl beside a table .
actual:   boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The puppy on the seat poked the boy .
actual:   * puppy ( 1 ) ; * seat ( 4 ) ; * boy ( 7 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * puppy ( 1 ) ; * seat ( 4 ) ; * boy ( 7 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The child on the pad ate the cat .
actual:   * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A baby on the chair saw the bear .
actual:   baby ( 1 ) ; * chair ( 4 ) ; * bear ( 7 ) ; nmod . on ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: baby ( 1 ) ; * chair ( 4 ) ; * bear ( 7 ) ; nmod . on ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The penguin in the drawer rolled the donut beside the computer .
actual:   * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The cat in a house adored the donut on a stage .
actual:   * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the stage found the banana in a bucket .
actual:   * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A boy on a plate sketched a chicken .
actual:   boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A bear on the seat discovered a boy beside a stage .
actual:   bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl on the corpse in a glass admired a teacher .
actual:   girl ( 1 ) ; * corpse ( 4 ) ; glass ( 7 ) ; teacher ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND admire ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: girl ( 1 ) ; * corpse ( 4 ) ; glass ( 7 ) ; teacher ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND admire ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl beside the table saw the cat in a car .
actual:   girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The citizen beside the duck adored the drink .
actual:   * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The girl in the house beside a cage dusted a ball .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: The girl on a table liked a journalist on a stage .
actual:   * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl in the swamp painted the glue .
actual:   girl ( 1 ) ; * swamp ( 4 ) ; * glue ( 7 ) ; nmod . in ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; * swamp ( 4 ) ; * glue ( 7 ) ; nmod . in ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A child on the bed poked a brush in the car .
actual:   child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl in the cart drew Emma .
actual:   * girl ( 1 ) ; * cart ( 4 ) ; Emma ( 6 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 6 )
expected: * girl ( 1 ) ; * cart ( 4 ) ; Emma ( 6 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 6 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl beside a bed crumpled the goose in the basin .
actual:   * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl on the table collapsed the rose on the trampoline .
actual:   * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl beside a stage cooked a cake in the shoe .
actual:   girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

Wu et al 2023 baseline model error NOT AS predicted (7 out of 241; 2.9%)¶

In [ ]:
len(example_agent_left_theme_right_single_point_mismatch_not_nmod_substitution_all)
Out[ ]:
7
In [ ]:
for example in example_agent_left_theme_right_single_point_mismatch_not_nmod_substitution_all:
  print(example)
input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A boy in the trailer poked the girl beside a table .
actual:   boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl on a table liked a journalist on a stage .
actual:   * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

Example sentences for agent-left, single-point mismatches (theme anywhere; this covers all such errors for which we have a prediction)¶

Note that sentences can occur more than once, and can occur in both the "As predicted" and "Not as predicted" conditions, because this is the combined output from n=10 separately trained Transformers with different behavior. A sentence does not appear 10 times because we only consider logical-form outputs with single-point errors; runs that get a sentence correct are excluded from this analysis entirely.

Wu et al 2023 baseline model error AS predicted (740 out of 765; 96.7%)¶

We assert here that the input sentence had the agent left of the verb modified by a prepositional noun phrase, that the baseline model made a single-point error in the logical form, and that the error was the agent of the verb being replaced by the closest prepositional noun to the left of the verb.
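The check described above can be sketched as follows. This is a hypothetical reconstruction for illustration, not the notebook's actual filtering code: it tokenizes the two logical forms on whitespace, requires exactly one differing token, requires that token to be an agent argument, and then tests the predicted substitution pattern (the wrong index is the nmod-prepositional noun attached to the true agent, e.g. `agent ( 5 , 1 )` becoming `agent ( 5 , 4 )` when the expected LF contains `nmod . in ( 1 , 4 )`).

```python
import re

def is_predicted_agent_nmod_substitution(actual, expected):
    """Return True iff the only difference between the two logical forms is
    that the agent's argument was replaced by the noun modifying it via an
    nmod -- the 'closest prepositional noun' error predicted by the hypothesis.
    (Hypothetical helper; assumes the space-separated ReCOGS LF token format.)"""
    a, e = actual.split(), expected.split()
    if len(a) != len(e):
        return False  # not a single-point mismatch: token counts differ
    diffs = [i for i in range(len(a)) if a[i] != e[i]]
    if len(diffs) != 1:
        return False  # zero or multiple differing tokens
    i = diffs[0]
    # In "agent ( v , n )" the second argument sits 4 tokens after "agent"
    if i < 4 or e[i - 4] != "agent":
        return False  # the single differing token is not an agent argument
    # Predicted error: the wrong index is the nmod noun attached to the
    # true agent, i.e. expected contains "nmod . <prep> ( true , wrong )"
    pattern = r"nmod \. \w+ \( %s , %s \)" % (re.escape(e[i]), re.escape(a[i]))
    return re.search(pattern, expected) is not None
```

Under this sketch, the "not as predicted" examples above (e.g. `agent ( 5 , 7 )` duplicating the theme index instead of using the prepositional noun) are still single-point agent errors, but the substituted index does not match any nmod attached to the true agent, so the function returns False for them.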

In [ ]:
len(example_agent_left_single_point_mismatch_nmod_substitution_all)
Out[ ]:
740
In [ ]:
for example in example_agent_left_single_point_mismatch_nmod_substitution_all:
  print(example)
input: A girl on the stool on the table drew a frog .
actual:   girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: A girl in the house slept .
actual:   girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The professor beside a table appreciated the key in a room .
actual:   * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A frog beside the table cried .
actual:   frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A driver beside the bed smiled .
actual:   driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The monster beside a road smiled .
actual:   * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The boy in the vase sent the cake on a table to a cat .
actual:   * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A deer beside the house slept .
actual:   deer ( 1 ) ; * house ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: deer ( 1 ) ; * house ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A dog beside the seat screamed .
actual:   dog ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )

input: The cat in a house adored the donut on a stage .
actual:   * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The politician beside the book cried .
actual:   * politician ( 1 ) ; * book ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * politician ( 1 ) ; * book ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The girl beside a table slept .
actual:   * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The horse on the stack loaned the lollipop on a table to Isaac .
actual:   * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The boy beside a cabinet danced .
actual:   * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: A boy in the house lended the mouse the cake beside a seat .
actual:   boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The frog in a house slept .
actual:   * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The dog beside the table cried .
actual:   * dog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A coach on the table talked .
actual:   coach ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND talk ( 5 ) AND agent ( 5 , 4 )
expected: coach ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND talk ( 5 ) AND agent ( 5 , 1 )

input: A girl in a room sent a frog a cake beside the pillar .
actual:   girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The horse on a bed slept .
actual:   * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The prince in a bin smiled .
actual:   * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A bear on the seat discovered a boy beside a stage .
actual:   bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The teacher in the trap slept .
actual:   * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The frog beside a doll slept .
actual:   * frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A boy in the haystack slept .
actual:   boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A dog on the stage snored .
actual:   dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )

input: A dog in the wardrobe smiled .
actual:   dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl in the taxi slept .
actual:   * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the road cried .
actual:   * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A professor beside the bed smiled .
actual:   professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The cat on the tabletop sold the princess a cake beside a monkey .
actual:   * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl on a table liked a journalist on a stage .
actual:   * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the room cried .
actual:   girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the chair smiled .
actual:   * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The dog on a table snored .
actual:   * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )

input: The boy beside the whale slept .
actual:   * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A girl on the panel drew .
actual:   girl ( 1 ) ; * panel ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * panel ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )

input: A child on the bed poked a brush in the car .
actual:   child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The dog on a table scoffed .
actual:   * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )

input: The chicken on a table rented the bean on the log to a girl .
actual:   * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: The girl in the tin fed the cake beside a clock to Liam .
actual:   * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: The kid on a trampoline slept .
actual:   * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A girl on the chair slept .
actual:   girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A boy on the plate jogged .
actual:   boy ( 1 ) ; * plate ( 4 ) ; nmod . on ( 1 , 4 ) AND jog ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; * plate ( 4 ) ; nmod . on ( 1 , 4 ) AND jog ( 5 ) AND agent ( 5 , 1 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The child beside the chair slept .
actual:   * child ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * child ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A girl on the dog handed a cat the raisin on a table .
actual:   girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: A cow in the puddle slept .
actual:   cow ( 1 ) ; * puddle ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: cow ( 1 ) ; * puddle ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The boy beside a chair danced .
actual:   * boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: The sailor in a house lended a biscuit on a table to a goose .
actual:   * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A bear in the car froze the key on the table .
actual:   bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A baby in a garden called the raisin .
actual:   baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A girl in the house knew a cake .
actual:   girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The child in a drawer gave Amelia a box beside the machine .
actual:   * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: The professor beside a table appreciated the key in a room .
actual:   * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A cat on a bag cleaned a chemical in a house .
actual:   cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl beside a rock passed Dylan a pen on a box .
actual:   girl ( 1 ) ; rock ( 4 ) ; Dylan ( 6 ) ; pen ( 8 ) ; box ( 11 ) ; nmod . beside ( 1 , 4 ) AND pass ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: girl ( 1 ) ; rock ( 4 ) ; Dylan ( 6 ) ; pen ( 8 ) ; box ( 11 ) ; nmod . beside ( 1 , 4 ) AND pass ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the table dusted the baby .
actual:   * girl ( 1 ) ; * table ( 4 ) ; * baby ( 7 ) ; nmod . beside ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * girl ( 1 ) ; * table ( 4 ) ; * baby ( 7 ) ; nmod . beside ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The boy in the vase sent the cake on a table to a cat .
actual:   * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: The wolf in the house offered the donut on the dish to Sophia .
actual:   * wolf ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * dish ( 10 ) ; Sophia ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * wolf ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * dish ( 10 ) ; Sophia ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The cat in a house adored the donut on a stage .
actual:   * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The dog on the platter beside a stage slept .
actual:   * dog ( 1 ) ; * platter ( 4 ) ; stage ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 7 )
expected: * dog ( 1 ) ; * platter ( 4 ) ; stage ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 1 )

input: The horse on the stack loaned the lollipop on a table to Isaac .
actual:   * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The cat on the table awarded a cake on the stand to Oliver .
actual:   * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The girl beside the stage found the banana in a bucket .
actual:   * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The fish beside the seat offered the hamburger beside a key to a frog .
actual:   * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: The frog on the table gave a cake beside the bottle to James .
actual:   * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: A boy in the house lended the mouse the cake beside a seat .
actual:   boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The cat on a boat gave the box on a table to a boy .
actual:   * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A girl in a room sent a frog a cake beside the pillar .
actual:   girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The girl on a tree offered the boy the banana beside a table .
actual:   * girl ( 1 ) ; tree ( 4 ) ; * boy ( 7 ) ; * banana ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * girl ( 1 ) ; tree ( 4 ) ; * boy ( 7 ) ; * banana ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The girl beside a stage lended the cake in the house to Liam .
actual:   * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )

input: The girl beside the tree in the bookstore slept .
actual:   * girl ( 1 ) ; * tree ( 4 ) ; * bookstore ( 7 ) ; nmod . beside ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 7 )
expected: * girl ( 1 ) ; * tree ( 4 ) ; * bookstore ( 7 ) ; nmod . beside ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 1 )

input: A cat on the bed decomposed the cake in the cylinder .
actual:   cat ( 1 ) ; * bed ( 4 ) ; * cake ( 7 ) ; * cylinder ( 10 ) ; nmod . on ( 1 , 4 ) AND decompose ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; * bed ( 4 ) ; * cake ( 7 ) ; * cylinder ( 10 ) ; nmod . on ( 1 , 4 ) AND decompose ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl beside the table gave a mouse a mirror in the saucepan .
actual:   girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )
expected: girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A child on a table gave Scarlett a balloon beside a lemon .
actual:   child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The cat on the canvas gave the glue beside a table to a girl .
actual:   * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A girl beside the table saw the cat in a car .
actual:   girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The teacher in a house awarded a cookie beside a seat to the bee .
actual:   * teacher ( 1 ) ; house ( 4 ) ; cookie ( 7 ) ; seat ( 10 ) ; * bee ( 13 ) ; nmod . in ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * teacher ( 1 ) ; house ( 4 ) ; cookie ( 7 ) ; seat ( 10 ) ; * bee ( 13 ) ; nmod . in ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: The cat on the tabletop sold the princess a cake beside a monkey .
actual:   * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The citizen beside the duck adored the drink .
actual:   * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The girl in the house beside a cage dusted a ball .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: The boy beside a bed gave Audrey a cake on the pedestal .
actual:   * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: A creature in a house beside the book slept .
actual:   creature ( 1 ) ; house ( 4 ) ; * book ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 7 )
expected: creature ( 1 ) ; house ( 4 ) ; * book ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 1 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl in a house sold the cake beside the stage to Emma .
actual:   girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: The resident on a computer gave a cake beside a helicopter to the girl .
actual:   * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A girl in the house gave the host a bat beside the pepper .
actual:   girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl in a container gave the brush in the cart to a duke .
actual:   girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: The girl beside a table rented Camila the cake beside the bed .
actual:   * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A child on the bed poked a brush in the car .
actual:   child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The chicken on a table rented the bean on the log to a girl .
actual:   * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A pony on a crack fed the guitar beside a broker to the sailor .
actual:   pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: The girl in the tin fed the cake beside a clock to Liam .
actual:   * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: A baby in the car offered a cake on a bible to Charlotte .
actual:   baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: A girl beside a stage cooked a cake in the shoe .
actual:   girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The mouse on a table gave the donut in the nest to a cat .
actual:   * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: A boy beside a broker lended Emma the melon on the plate .
actual:   boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: A buyer beside the table rolled the cake in the backpack .
actual:   buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A fish on a leaflet loaned the cat the donut beside the stage .
actual:   fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl on the dog handed a cat the raisin on a table .
actual:   girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: The boy on a towel gave the frog the cake on a booklet .
actual:   * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: The cat beside the stool gave a cake in a cup to a customer .
actual:   * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: The baby on a tray in the house screamed .
actual:   * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 7 )
expected: * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )

input: A baby on a truck slept .
actual:   baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A girl in a hole slept .
actual:   girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The monster beside a road smiled .
actual:   * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The boy in the vase sent the cake on a table to a cat .
actual:   * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A cat on a sofa slept .
actual:   cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The dog on the platter beside a stage slept .
actual:   * dog ( 1 ) ; * platter ( 4 ) ; stage ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 7 )
expected: * dog ( 1 ) ; * platter ( 4 ) ; stage ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 1 )

input: A girl on a rock smiled .
actual:   girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The prince in a bin smiled .
actual:   * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the tree in the bookstore slept .
actual:   * girl ( 1 ) ; * tree ( 4 ) ; * bookstore ( 7 ) ; nmod . beside ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 7 )
expected: * girl ( 1 ) ; * tree ( 4 ) ; * bookstore ( 7 ) ; nmod . beside ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 1 )

input: A child in a room on a stage smiled .
actual:   child ( 1 ) ; room ( 4 ) ; stage ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND smile ( 8 ) AND agent ( 8 , 7 )
expected: child ( 1 ) ; room ( 4 ) ; stage ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND smile ( 8 ) AND agent ( 8 , 1 )

input: A dog in the wardrobe smiled .
actual:   dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A girl on a table smiled .
actual:   girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A boy beside a chair laughed .
actual:   boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )

input: A child in a car smiled .
actual:   child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A girl on the dog handed a cat the raisin on a table .
actual:   girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: A host beside a table smiled .
actual:   host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A girl in the house slept .
actual:   girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The baby beside a valve painted the cake .
actual:   * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A baby in a garden called the raisin .
actual:   baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The child in a drawer gave Amelia a box beside the machine .
actual:   * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A frog beside the table cried .
actual:   frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A driver beside the bed smiled .
actual:   driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The lamb beside a toy ran .
actual:   * lamb ( 1 ) ; toy ( 4 ) ; nmod . beside ( 1 , 4 ) AND run ( 5 ) AND agent ( 5 , 4 )
expected: * lamb ( 1 ) ; toy ( 4 ) ; nmod . beside ( 1 , 4 ) AND run ( 5 ) AND agent ( 5 , 1 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The monster beside a road smiled .
actual:   * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A boy in the trailer poked the girl beside a table .
actual:   boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The boy in the vase sent the cake on a table to a cat .
actual:   * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The penguin in the drawer rolled the donut beside the computer .
actual:   * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A deer beside the house slept .
actual:   deer ( 1 ) ; * house ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: deer ( 1 ) ; * house ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A dog beside the seat screamed .
actual:   dog ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )

input: The politician beside the book cried .
actual:   * politician ( 1 ) ; * book ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * politician ( 1 ) ; * book ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A girl on the surface screamed .
actual:   girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )

input: The girl beside a table slept .
actual:   * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The boy beside a cabinet danced .
actual:   * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the stage found the banana in a bucket .
actual:   * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The frog on the table gave a cake beside the bottle to James .
actual:   * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: A boy in the house lended the mouse the cake beside a seat .
actual:   boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The frog in a house slept .
actual:   * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl on a booklet walked .
actual:   * girl ( 1 ) ; booklet ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; booklet ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )

input: The cat on a boat gave the box on a table to a boy .
actual:   * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A coach on the table talked .
actual:   coach ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND talk ( 5 ) AND agent ( 5 , 4 )
expected: coach ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND talk ( 5 ) AND agent ( 5 , 1 )

input: A girl in a room sent a frog a cake beside the pillar .
actual:   girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The horse on a bed slept .
actual:   * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The prince in a bin smiled .
actual:   * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl beside a stage lended the cake in the house to Liam .
actual:   * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The teacher in the trap slept .
actual:   * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The frog beside a doll slept .
actual:   * frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A girl beside the table gave a mouse a mirror in the saucepan .
actual:   girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )
expected: girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )

input: A boy in the haystack slept .
actual:   boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A dog on the stage snored .
actual:   dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )

input: A dog in the wardrobe smiled .
actual:   dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the road cried .
actual:   * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A professor beside the bed smiled .
actual:   professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The cat on the canvas gave the glue beside a table to a girl .
actual:   * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: The cat on the tabletop sold the princess a cake beside a monkey .
actual:   * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl on the rock walked .
actual:   girl ( 1 ) ; * rock ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * rock ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )

input: The boy beside a bed gave Audrey a cake on the pedestal .
actual:   * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: The girl on a table liked a journalist on a stage .
actual:   * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the room cried .
actual:   girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the chair smiled .
actual:   * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl in a house scoffed .
actual:   * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl in a house sold the cake beside the stage to Emma .
actual:   girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: The resident on a computer gave a cake beside a helicopter to the girl .
actual:   * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A girl in the swamp painted the glue .
actual:   girl ( 1 ) ; * swamp ( 4 ) ; * glue ( 7 ) ; nmod . in ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; * swamp ( 4 ) ; * glue ( 7 ) ; nmod . in ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A girl in the house gave the host a bat beside the pepper .
actual:   girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl in a container gave the brush in the cart to a duke .
actual:   girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: The dog on a table snored .
actual:   * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )

input: A girl on the surface cried .
actual:   girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The boy beside the whale slept .
actual:   * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The dog on a table scoffed .
actual:   * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )

input: A teacher on the table cried .
actual:   teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The girl in the tin fed the cake beside a clock to Liam .
actual:   * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: The kid on a trampoline slept .
actual:   * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A baby in the car offered a cake on a bible to Charlotte .
actual:   baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: A girl beside a stage cooked a cake in the shoe .
actual:   girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl on the chair slept .
actual:   girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A boy on the plate jogged .
actual:   boy ( 1 ) ; * plate ( 4 ) ; nmod . on ( 1 , 4 ) AND jog ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; * plate ( 4 ) ; nmod . on ( 1 , 4 ) AND jog ( 5 ) AND agent ( 5 , 1 )

input: A buyer beside the table rolled the cake in the backpack .
actual:   buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The child beside the chair slept .
actual:   * child ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * child ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A chicken in a car served a cat a box in the bun .
actual:   chicken ( 1 ) ; car ( 4 ) ; cat ( 7 ) ; box ( 9 ) ; * bun ( 12 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )
expected: chicken ( 1 ) ; car ( 4 ) ; cat ( 7 ) ; box ( 9 ) ; * bun ( 12 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )

input: A boy beside the chair sneezed .
actual:   boy ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND sneeze ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND sneeze ( 5 ) AND agent ( 5 , 1 )

input: A girl on the dog handed a cat the raisin on a table .
actual:   girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: The boy on a towel gave the frog the cake on a booklet .
actual:   * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: The cat beside the stool gave a cake in a cup to a customer .
actual:   * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: A cow in the puddle slept .
actual:   cow ( 1 ) ; * puddle ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: cow ( 1 ) ; * puddle ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The boy beside a chair danced .
actual:   * boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: The baby on a tray in the house screamed .
actual:   * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 7 )
expected: * baby ( 1 ) ; tray ( 4 ) ; * house ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND scream ( 8 ) AND agent ( 8 , 1 )

input: A girl on the stool on the table drew a frog .
actual:   girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: A goose in a spaceship gasped .
actual:   goose ( 1 ) ; spaceship ( 4 ) ; nmod . in ( 1 , 4 ) AND gasp ( 5 ) AND agent ( 5 , 4 )
expected: goose ( 1 ) ; spaceship ( 4 ) ; nmod . in ( 1 , 4 ) AND gasp ( 5 ) AND agent ( 5 , 1 )

input: A baby on a truck slept .
actual:   baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A girl in a hole slept .
actual:   girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The monster beside a road smiled .
actual:   * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A student on a tree sneezed .
actual:   student ( 1 ) ; tree ( 4 ) ; nmod . on ( 1 , 4 ) AND sneeze ( 5 ) AND agent ( 5 , 4 )
expected: student ( 1 ) ; tree ( 4 ) ; nmod . on ( 1 , 4 ) AND sneeze ( 5 ) AND agent ( 5 , 1 )

input: The boy in the vase sent the cake on a table to a cat .
actual:   * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A cat on a sofa slept .
actual:   cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The horse on the stack loaned the lollipop on a table to Isaac .
actual:   * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: A girl on a rock smiled .
actual:   girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A boy in the house lended the mouse the cake beside a seat .
actual:   boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The director on a bed on the machine lended a farmer the sandwich .
actual:   * director ( 1 ) ; bed ( 4 ) ; * machine ( 7 ) ; farmer ( 10 ) ; * sandwich ( 12 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND lend ( 8 ) AND agent ( 8 , 7 ) AND recipient ( 8 , 10 ) AND theme ( 8 , 12 )
expected: * director ( 1 ) ; bed ( 4 ) ; * machine ( 7 ) ; farmer ( 10 ) ; * sandwich ( 12 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND lend ( 8 ) AND agent ( 8 , 1 ) AND recipient ( 8 , 10 ) AND theme ( 8 , 12 )

input: The prince in a bin smiled .
actual:   * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A girl on a table smiled .
actual:   girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A tiger on a bible slept .
actual:   tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The cat on the canvas gave the glue beside a table to a girl .
actual:   * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: The cat on the tabletop sold the princess a cake beside a monkey .
actual:   * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The girl in the house beside a cage dusted a ball .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in a container gave the brush in the cart to a duke .
actual:   girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: A boy beside a chair laughed .
actual:   boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )

input: A teacher beside a table danced .
actual:   teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: The girl in the tin fed the cake beside a clock to Liam .
actual:   * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: A child in a car smiled .
actual:   child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl in the tub lended Emma the cake .
actual:   * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )
expected: * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )

input: A queen on a device screamed .
actual:   queen ( 1 ) ; device ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 )
expected: queen ( 1 ) ; device ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )

input: A fish on a leaflet loaned the cat the donut beside the stage .
actual:   fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl on the dog handed a cat the raisin on a table .
actual:   girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: A host beside a table smiled .
actual:   host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A girl in the house slept .
actual:   girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A bear in the car froze the key on the table .
actual:   bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the bed lended the manager the leaf .
actual:   * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A goose in a spaceship gasped .
actual:   goose ( 1 ) ; spaceship ( 4 ) ; nmod . in ( 1 , 4 ) AND gasp ( 5 ) AND agent ( 5 , 4 )
expected: goose ( 1 ) ; spaceship ( 4 ) ; nmod . in ( 1 , 4 ) AND gasp ( 5 ) AND agent ( 5 , 1 )

input: A baby on a truck slept .
actual:   baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A girl in a hole slept .
actual:   girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A cat on a bag cleaned a chemical in a house .
actual:   cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A bear beside a chair napped .
actual:   bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 4 )
expected: bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 1 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in a cage knew .
actual:   girl ( 1 ) ; cage ( 4 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; cage ( 4 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 1 )

input: The monster beside a road smiled .
actual:   * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A student on a tree sneezed .
actual:   student ( 1 ) ; tree ( 4 ) ; nmod . on ( 1 , 4 ) AND sneeze ( 5 ) AND agent ( 5 , 4 )
expected: student ( 1 ) ; tree ( 4 ) ; nmod . on ( 1 , 4 ) AND sneeze ( 5 ) AND agent ( 5 , 1 )

input: A girl in a car ate .
actual:   girl ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )

input: A boy in the trailer poked the girl beside a table .
actual:   boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The boy in the vase sent the cake on a table to a cat .
actual:   * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A cat on a sofa slept .
actual:   cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl on a boat cooked .
actual:   girl ( 1 ) ; boat ( 4 ) ; nmod . on ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; boat ( 4 ) ; nmod . on ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 )

input: The penguin in the drawer rolled the donut beside the computer .
actual:   * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl on the surface screamed .
actual:   girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )

input: The girl beside a table slept .
actual:   * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The horse on the stack loaned the lollipop on a table to Isaac .
actual:   * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The teacher on the table gave Liam a cake on the tripod .
actual:   * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: A girl on a rock smiled .
actual:   girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The boy beside a cabinet danced .
actual:   * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the stage found the banana in a bucket .
actual:   * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The frog on the table gave a cake beside the bottle to James .
actual:   * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl on a booklet walked .
actual:   * girl ( 1 ) ; booklet ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; booklet ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )

input: A squirrel on a computer drew .
actual:   squirrel ( 1 ) ; computer ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 )
expected: squirrel ( 1 ) ; computer ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )

input: The horse on a bed slept .
actual:   * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A bear on the seat discovered a boy beside a stage .
actual:   bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The frog beside a doll slept .
actual:   * frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The monkey on the futon gave the cat a pretzel .
actual:   * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A boy in the haystack slept .
actual:   boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A dog on the stage snored .
actual:   dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )

input: A dog in the wardrobe smiled .
actual:   dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl in the taxi slept .
actual:   * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A child on a table gave Scarlett a balloon beside a lemon .
actual:   child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl on a table smiled .
actual:   girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A tiger on a bible slept .
actual:   tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the road cried .
actual:   * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A servant in a car heard .
actual:   servant ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND hear ( 5 ) AND agent ( 5 , 4 )
expected: servant ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND hear ( 5 ) AND agent ( 5 , 1 )

input: The cat on the canvas gave the glue beside a table to a girl .
actual:   * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A girl beside the table saw the cat in a car .
actual:   girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The cat on the tabletop sold the princess a cake beside a monkey .
actual:   * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl on the rock walked .
actual:   girl ( 1 ) ; * rock ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * rock ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )

input: A mouse in a container drew .
actual:   mouse ( 1 ) ; container ( 4 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 )
expected: mouse ( 1 ) ; container ( 4 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )

input: A girl in the room cried .
actual:   girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl in a house scoffed .
actual:   * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )

input: The girl on a tray served the cat a cake .
actual:   * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A puppy in the car juggled .
actual:   puppy ( 1 ) ; * car ( 4 ) ; nmod . in ( 1 , 4 ) AND juggle ( 5 ) AND agent ( 5 , 4 )
expected: puppy ( 1 ) ; * car ( 4 ) ; nmod . in ( 1 , 4 ) AND juggle ( 5 ) AND agent ( 5 , 1 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The dog on a table snored .
actual:   * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )

input: A girl on the surface cried .
actual:   girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A frog in a bag sketched .
actual:   frog ( 1 ) ; bag ( 4 ) ; nmod . in ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 )
expected: frog ( 1 ) ; bag ( 4 ) ; nmod . in ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 )

input: The boy beside the whale slept .
actual:   * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The consumer on the bed gave Evelyn a molecule beside the duck .
actual:   * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: The lion beside a piano gave the girl the donut .
actual:   * lion ( 1 ) ; piano ( 4 ) ; * girl ( 7 ) ; * donut ( 9 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * lion ( 1 ) ; piano ( 4 ) ; * girl ( 7 ) ; * donut ( 9 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A girl on the panel drew .
actual:   girl ( 1 ) ; * panel ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * panel ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )

input: A boy beside a chair laughed .
actual:   boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl on a surface sketched .
actual:   girl ( 1 ) ; surface ( 4 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; surface ( 4 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 )

input: The dog on a table scoffed .
actual:   * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )

input: A boy on a bed sent the cat a donut .
actual:   boy ( 1 ) ; bed ( 4 ) ; * cat ( 7 ) ; donut ( 9 ) ; nmod . on ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: boy ( 1 ) ; bed ( 4 ) ; * cat ( 7 ) ; donut ( 9 ) ; nmod . on ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A teacher on the table cried .
actual:   teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A teacher beside a table danced .
actual:   teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: A boy on the surface gave the girl a bell .
actual:   boy ( 1 ) ; * surface ( 4 ) ; * girl ( 7 ) ; bell ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: boy ( 1 ) ; * surface ( 4 ) ; * girl ( 7 ) ; bell ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: The kid on a trampoline slept .
actual:   * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A child in a car smiled .
actual:   child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl in the tub lended Emma the cake .
actual:   * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )
expected: * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )

input: A girl beside a stage cooked a cake in the shoe .
actual:   girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The mouse on a table gave the donut in the nest to a cat .
actual:   * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: A girl on the chair slept .
actual:   girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A queen on a device screamed .
actual:   queen ( 1 ) ; device ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 )
expected: queen ( 1 ) ; device ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )

input: A buyer beside the table rolled the cake in the backpack .
actual:   buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A fish on a leaflet loaned the cat the donut beside the stage .
actual:   fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl on a stool observed .
actual:   girl ( 1 ) ; stool ( 4 ) ; nmod . on ( 1 , 4 ) AND observe ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; stool ( 4 ) ; nmod . on ( 1 , 4 ) AND observe ( 5 ) AND agent ( 5 , 1 )

input: A girl on the dog handed a cat the raisin on a table .
actual:   girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: The boy on a towel gave the frog the cake on a booklet .
actual:   * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: A director in a house walked .
actual:   director ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 )
expected: director ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )

input: A host beside a table smiled .
actual:   host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The baby on the stage gave the girl a cake .
actual:   * baby ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * baby ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A bear beside a chair napped .
actual:   bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 4 )
expected: bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 1 )

input: The boy in the vase sent the cake on a table to a cat .
actual:   * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: The politician beside the book cried .
actual:   * politician ( 1 ) ; * book ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * politician ( 1 ) ; * book ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The horse on the stack loaned the lollipop on a table to Isaac .
actual:   * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: A girl on a rock smiled .
actual:   girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The boy beside a cabinet danced .
actual:   * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: The dog beside the table cried .
actual:   * dog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The prince in a bin smiled .
actual:   * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A dog on the stage snored .
actual:   dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )

input: A girl on a table smiled .
actual:   girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the road cried .
actual:   * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The cat on the tabletop sold the princess a cake beside a monkey .
actual:   * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A boy beside a chair laughed .
actual:   boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )

input: A child in a car smiled .
actual:   child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A fish on a leaflet loaned the cat the donut beside the stage .
actual:   fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl on the dog handed a cat the raisin on a table .
actual:   girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: A host beside a table smiled .
actual:   host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A donkey in the room sold Ella a donut .
actual:   donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )
expected: donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )

input: The dog in a bakery in the bag sneezed .
actual:   * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 7 )
expected: * dog ( 1 ) ; bakery ( 4 ) ; * bag ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sneeze ( 8 ) AND agent ( 8 , 1 )

input: The sailor in a house lended a biscuit on a table to a goose .
actual:   * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A bear in the car froze the key on the table .
actual:   bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the bed lended the manager the leaf .
actual:   * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A mouse beside the table ate .
actual:   mouse ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 )
expected: mouse ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )

input: The baby beside a valve painted the cake .
actual:   * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A baby in a garden called the raisin .
actual:   baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A girl in the house knew a cake .
actual:   girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND know ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The child in a drawer gave Amelia a box beside the machine .
actual:   * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A cat on a bag cleaned a chemical in a house .
actual:   cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A frog beside the table cried .
actual:   frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A boy on the bed sketched .
actual:   boy ( 1 ) ; * bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; * bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 )

input: A girl beside a rock passed Dylan a pen on a box .
actual:   girl ( 1 ) ; rock ( 4 ) ; Dylan ( 6 ) ; pen ( 8 ) ; box ( 11 ) ; nmod . beside ( 1 , 4 ) AND pass ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: girl ( 1 ) ; rock ( 4 ) ; Dylan ( 6 ) ; pen ( 8 ) ; box ( 11 ) ; nmod . beside ( 1 , 4 ) AND pass ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: A driver beside the bed smiled .
actual:   driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The researcher in a room ate the baby .
actual:   * researcher ( 1 ) ; room ( 4 ) ; * baby ( 7 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * researcher ( 1 ) ; room ( 4 ) ; * baby ( 7 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The monster beside a road smiled .
actual:   * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl in the house forwarded Victoria a gumball in the shoe .
actual:   girl ( 1 ) ; * house ( 4 ) ; Victoria ( 6 ) ; gumball ( 8 ) ; * shoe ( 11 ) ; nmod . in ( 1 , 4 ) AND forward ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )
expected: girl ( 1 ) ; * house ( 4 ) ; Victoria ( 6 ) ; gumball ( 8 ) ; * shoe ( 11 ) ; nmod . in ( 1 , 4 ) AND forward ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )

input: A boy in the trailer poked the girl beside a table .
actual:   boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The boy in the vase sent the cake on a table to a cat .
actual:   * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: The child on the pad ate the cat .
actual:   * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A teacher beside the table burned the producer on the road .
actual:   teacher ( 1 ) ; * table ( 4 ) ; * producer ( 7 ) ; * road ( 10 ) ; nmod . beside ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: teacher ( 1 ) ; * table ( 4 ) ; * producer ( 7 ) ; * road ( 10 ) ; nmod . beside ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The wolf in the house offered the donut on the dish to Sophia .
actual:   * wolf ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * dish ( 10 ) ; Sophia ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * wolf ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * dish ( 10 ) ; Sophia ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A horse in the van ate .
actual:   horse ( 1 ) ; * van ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 )
expected: horse ( 1 ) ; * van ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )

input: The penguin in the drawer rolled the donut beside the computer .
actual:   * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A dog beside the seat screamed .
actual:   dog ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )

input: The frog on a cot dusted a cookie .
actual:   * frog ( 1 ) ; cot ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * frog ( 1 ) ; cot ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The cat in a house adored the donut on a stage .
actual:   * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl on the surface screamed .
actual:   girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )

input: The horse on the stack loaned the lollipop on a table to Isaac .
actual:   * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The cat on the table awarded a cake on the stand to Oliver .
actual:   * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The cat in a house studied a boy .
actual:   * cat ( 1 ) ; house ( 4 ) ; boy ( 7 ) ; nmod . in ( 1 , 4 ) AND study ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * cat ( 1 ) ; house ( 4 ) ; boy ( 7 ) ; nmod . in ( 1 , 4 ) AND study ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The teacher on the table gave Liam a cake on the tripod .
actual:   * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: The deer in a house hunted a melon .
actual:   * deer ( 1 ) ; house ( 4 ) ; melon ( 7 ) ; nmod . in ( 1 , 4 ) AND hunt ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * deer ( 1 ) ; house ( 4 ) ; melon ( 7 ) ; nmod . in ( 1 , 4 ) AND hunt ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The boy beside a cabinet danced .
actual:   * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; cabinet ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the stage found the banana in a bucket .
actual:   * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The fish beside the seat offered the hamburger beside a key to a frog .
actual:   * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: The frog on the table gave a cake beside the bottle to James .
actual:   * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: A boy on a plate sketched a chicken .
actual:   boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A boy in the house lended the mouse the cake beside a seat .
actual:   boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The dog in a garden ate .
actual:   * dog ( 1 ) ; garden ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; garden ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The dog beside the table cried .
actual:   * dog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The cat on a boat gave the box on a table to a boy .
actual:   * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A girl in a room sent a frog a cake beside the pillar .
actual:   girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The girl on a tree offered the boy the banana beside a table .
actual:   * girl ( 1 ) ; tree ( 4 ) ; * boy ( 7 ) ; * banana ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * girl ( 1 ) ; tree ( 4 ) ; * boy ( 7 ) ; * banana ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The prince in a bin smiled .
actual:   * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A cat in a bag found a book in the well .
actual:   cat ( 1 ) ; bag ( 4 ) ; book ( 7 ) ; * well ( 10 ) ; nmod . in ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; bag ( 4 ) ; book ( 7 ) ; * well ( 10 ) ; nmod . in ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl beside a stage lended the cake in the house to Liam .
actual:   * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )

input: The girl beside the tree in the bookstore slept .
actual:   * girl ( 1 ) ; * tree ( 4 ) ; * bookstore ( 7 ) ; nmod . beside ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 7 )
expected: * girl ( 1 ) ; * tree ( 4 ) ; * bookstore ( 7 ) ; nmod . beside ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 1 )

input: A bear on the seat discovered a boy beside a stage .
actual:   bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A cat on the bed decomposed the cake in the cylinder .
actual:   cat ( 1 ) ; * bed ( 4 ) ; * cake ( 7 ) ; * cylinder ( 10 ) ; nmod . on ( 1 , 4 ) AND decompose ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; * bed ( 4 ) ; * cake ( 7 ) ; * cylinder ( 10 ) ; nmod . on ( 1 , 4 ) AND decompose ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl beside a boat drew a soap .
actual:   girl ( 1 ) ; boat ( 4 ) ; soap ( 7 ) ; nmod . beside ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; boat ( 4 ) ; soap ( 7 ) ; nmod . beside ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A child beside the table rolled the student in the tin .
actual:   child ( 1 ) ; * table ( 4 ) ; * student ( 7 ) ; * tin ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * table ( 4 ) ; * student ( 7 ) ; * tin ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl in a envelope sold Liam the cake beside the computer .
actual:   girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A girl beside the table gave a mouse a mirror in the saucepan .
actual:   girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )
expected: girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )

input: A dog on the stage snored .
actual:   dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * stage ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )

input: A dog in the wardrobe smiled .
actual:   dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A boy beside the seat drew .
actual:   boy ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; * seat ( 4 ) ; nmod . beside ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )

input: A child on a table gave Scarlett a balloon beside a lemon .
actual:   child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A boy on the stage observed the donut .
actual:   boy ( 1 ) ; * stage ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND observe ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: boy ( 1 ) ; * stage ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND observe ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A boy on the stage nursed a cookie .
actual:   boy ( 1 ) ; * stage ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND nurse ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: boy ( 1 ) ; * stage ( 4 ) ; cookie ( 7 ) ; nmod . on ( 1 , 4 ) AND nurse ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A bunny on the tree drew .
actual:   bunny ( 1 ) ; * tree ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 )
expected: bunny ( 1 ) ; * tree ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The boy beside a yacht cleaned .
actual:   * boy ( 1 ) ; yacht ( 4 ) ; nmod . beside ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; yacht ( 4 ) ; nmod . beside ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 )

input: A deer beside the table gave Emma a sweetcorn in the garden .
actual:   deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )
expected: deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )

input: The girl beside the road cried .
actual:   * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The boy on a table called .
actual:   * boy ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 )

input: A child beside a stage gave Emma a donut beside the house .
actual:   child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: The cat on a bible ate the donut .
actual:   * cat ( 1 ) ; bible ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * cat ( 1 ) ; bible ( 4 ) ; * donut ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A professor beside the bed smiled .
actual:   professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A boy on a table drew a baby .
actual:   boy ( 1 ) ; table ( 4 ) ; baby ( 7 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: boy ( 1 ) ; table ( 4 ) ; baby ( 7 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The cat on the canvas gave the glue beside a table to a girl .
actual:   * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A girl beside the table saw the cat in a car .
actual:   girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The teacher in a house awarded a cookie beside a seat to the bee .
actual:   * teacher ( 1 ) ; house ( 4 ) ; cookie ( 7 ) ; seat ( 10 ) ; * bee ( 13 ) ; nmod . in ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * teacher ( 1 ) ; house ( 4 ) ; cookie ( 7 ) ; seat ( 10 ) ; * bee ( 13 ) ; nmod . in ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: The cat on the tabletop sold the princess a cake beside a monkey .
actual:   * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A cat in the blender ate .
actual:   cat ( 1 ) ; * blender ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 )
expected: cat ( 1 ) ; * blender ( 4 ) ; nmod . in ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )

input: The boy beside a bed gave Audrey a cake on the pedestal .
actual:   * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: The girl on a table liked a journalist on a stage .
actual:   * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the room cried .
actual:   girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the chair smiled .
actual:   * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl in a house scoffed .
actual:   * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )

input: The girl on a tray served the cat a cake .
actual:   * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A doctor beside the stage grew a box beside the table .
actual:   doctor ( 1 ) ; * stage ( 4 ) ; box ( 7 ) ; * table ( 10 ) ; nmod . beside ( 1 , 4 ) AND grow ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: doctor ( 1 ) ; * stage ( 4 ) ; box ( 7 ) ; * table ( 10 ) ; nmod . beside ( 1 , 4 ) AND grow ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A puppy in the car juggled .
actual:   puppy ( 1 ) ; * car ( 4 ) ; nmod . in ( 1 , 4 ) AND juggle ( 5 ) AND agent ( 5 , 4 )
expected: puppy ( 1 ) ; * car ( 4 ) ; nmod . in ( 1 , 4 ) AND juggle ( 5 ) AND agent ( 5 , 1 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl in a house sold the cake beside the stage to Emma .
actual:   girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: The resident on a computer gave a cake beside a helicopter to the girl .
actual:   * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: The girl in a glass served the boy a balloon .
actual:   * girl ( 1 ) ; glass ( 4 ) ; * boy ( 7 ) ; balloon ( 9 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * girl ( 1 ) ; glass ( 4 ) ; * boy ( 7 ) ; balloon ( 9 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A girl in the house gave the host a bat beside the pepper .
actual:   girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl in a container gave the brush in the cart to a duke .
actual:   girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: The dog on a table snored .
actual:   * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the table rolled the cake beside the tree .
actual:   * girl ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * tree ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * tree ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl on the surface cried .
actual:   girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A girl beside the table packed a cake .
actual:   girl ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND pack ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND pack ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The boy on the table laughed .
actual:   * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )

input: The boy in a house froze the sailor in a can .
actual:   * boy ( 1 ) ; house ( 4 ) ; * sailor ( 7 ) ; can ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * boy ( 1 ) ; house ( 4 ) ; * sailor ( 7 ) ; can ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl beside a table rented Camila the cake beside the bed .
actual:   * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A creature in the house drew .
actual:   creature ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 )
expected: creature ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )

input: The consumer on the bed gave Evelyn a molecule beside the duck .
actual:   * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A girl on the panel drew .
actual:   girl ( 1 ) ; * panel ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * panel ( 4 ) ; nmod . on ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 )

input: A child on the bed poked a brush in the car .
actual:   child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The child on a table burned the pizza beside a stage .
actual:   * child ( 1 ) ; table ( 4 ) ; * pizza ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; table ( 4 ) ; * pizza ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND burn ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The dog on a table scoffed .
actual:   * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )

input: The chicken on a table rented the bean on the log to a girl .
actual:   * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A boy on a bed sent the cat a donut .
actual:   boy ( 1 ) ; bed ( 4 ) ; * cat ( 7 ) ; donut ( 9 ) ; nmod . on ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: boy ( 1 ) ; bed ( 4 ) ; * cat ( 7 ) ; donut ( 9 ) ; nmod . on ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A pony on a crack fed the guitar beside a broker to the sailor .
actual:   pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A teacher on the table cried .
actual:   teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A friend beside the table ate .
actual:   friend ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 )
expected: friend ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 )

input: The girl in the tin fed the cake beside a clock to Liam .
actual:   * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: The girl beside a bed crumpled the goose in the basin .
actual:   * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The boy on the stage offered the girl a cookie .
actual:   * boy ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cookie ( 9 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * boy ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cookie ( 9 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: The girl on the table collapsed the rose on the trampoline .
actual:   * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A baby in the car offered a cake on a bible to Charlotte .
actual:   baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: A girl beside a stage cooked a cake in the shoe .
actual:   girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The frog on a stage packed .
actual:   * frog ( 1 ) ; stage ( 4 ) ; nmod . on ( 1 , 4 ) AND pack ( 5 ) AND agent ( 5 , 4 )
expected: * frog ( 1 ) ; stage ( 4 ) ; nmod . on ( 1 , 4 ) AND pack ( 5 ) AND agent ( 5 , 1 )

input: A boy on the plate jogged .
actual:   boy ( 1 ) ; * plate ( 4 ) ; nmod . on ( 1 , 4 ) AND jog ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; * plate ( 4 ) ; nmod . on ( 1 , 4 ) AND jog ( 5 ) AND agent ( 5 , 1 )

input: A boy beside a broker lended Emma the melon on the plate .
actual:   boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: A buyer beside the table rolled the cake in the backpack .
actual:   buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: buyer ( 1 ) ; * table ( 4 ) ; * cake ( 7 ) ; * backpack ( 10 ) ; nmod . beside ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A fish on a leaflet loaned the cat the donut beside the stage .
actual:   fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A chicken in a car served a cat a box in the bun .
actual:   chicken ( 1 ) ; car ( 4 ) ; cat ( 7 ) ; box ( 9 ) ; * bun ( 12 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )
expected: chicken ( 1 ) ; car ( 4 ) ; cat ( 7 ) ; box ( 9 ) ; * bun ( 12 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )

input: The president beside a bed painted a cake .
actual:   * president ( 1 ) ; bed ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * president ( 1 ) ; bed ( 4 ) ; cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A girl on the dog handed a cat the raisin on a table .
actual:   girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: The boy on a towel gave the frog the cake on a booklet .
actual:   * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: The cat beside the stool gave a cake in a cup to a customer .
actual:   * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: The sailor in a house lended a biscuit on a table to a goose .
actual:   * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * sailor ( 1 ) ; house ( 4 ) ; biscuit ( 7 ) ; table ( 10 ) ; goose ( 13 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A bear in the car froze the key on the table .
actual:   bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: bear ( 1 ) ; * car ( 4 ) ; * key ( 7 ) ; * table ( 10 ) ; nmod . in ( 1 , 4 ) AND freeze ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the bed lended the manager the leaf .
actual:   * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A baby on a truck slept .
actual:   baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: baby ( 1 ) ; truck ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A girl in a hole slept .
actual:   girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The child in a drawer gave Amelia a box beside the machine .
actual:   * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A driver beside the bed smiled .
actual:   driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The monster beside a road smiled .
actual:   * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The boy in the vase sent the cake on a table to a cat .
actual:   * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: The child on the pad ate the cat .
actual:   * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The wolf in the house offered the donut on the dish to Sophia .
actual:   * wolf ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * dish ( 10 ) ; Sophia ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * wolf ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * dish ( 10 ) ; Sophia ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The cat in a house adored the donut on a stage .
actual:   * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside a table slept .
actual:   * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The horse on the stack loaned the lollipop on a table to Isaac .
actual:   * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The cat on the table awarded a cake on the stand to Oliver .
actual:   * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The teacher on the table gave Liam a cake on the tripod .
actual:   * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: A girl on a rock smiled .
actual:   girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The fish beside the seat offered the hamburger beside a key to a frog .
actual:   * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: The frog on the table gave a cake beside the bottle to James .
actual:   * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: A boy in the house lended the mouse the cake beside a seat .
actual:   boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The frog in a house slept .
actual:   * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The cat on a boat gave the box on a table to a boy .
actual:   * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A dog in the house liked a cake .
actual:   dog ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: dog ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A girl in a room sent a frog a cake beside the pillar .
actual:   girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The horse on a bed slept .
actual:   * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The prince in a bin smiled .
actual:   * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl beside a stage lended the cake in the house to Liam .
actual:   * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The teacher in the trap slept .
actual:   * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl in a envelope sold Liam the cake beside the computer .
actual:   girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: The monkey on the futon gave the cat a pretzel .
actual:   * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A dog in the wardrobe smiled .
actual:   dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl in the taxi slept .
actual:   * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl on a table smiled .
actual:   girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A deer beside the table gave Emma a sweetcorn in the garden .
actual:   deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )
expected: deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )

input: A child beside a stage gave Emma a donut beside the house .
actual:   child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A professor beside the bed smiled .
actual:   professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The cat on the canvas gave the glue beside a table to a girl .
actual:   * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A girl beside the table saw the cat in a car .
actual:   girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The teacher in a house awarded a cookie beside a seat to the bee .
actual:   * teacher ( 1 ) ; house ( 4 ) ; cookie ( 7 ) ; seat ( 10 ) ; * bee ( 13 ) ; nmod . in ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * teacher ( 1 ) ; house ( 4 ) ; cookie ( 7 ) ; seat ( 10 ) ; * bee ( 13 ) ; nmod . in ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: The cat on the tabletop sold the princess a cake beside a monkey .
actual:   * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The citizen beside the duck adored the drink .
actual:   * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The boy beside a bed gave Audrey a cake on the pedestal .
actual:   * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl beside the chair smiled .
actual:   * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl in a house scoffed .
actual:   * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl in a house sold the cake beside the stage to Emma .
actual:   girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: The resident on a computer gave a cake beside a helicopter to the girl .
actual:   * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A girl in the house gave the host a bat beside the pepper .
actual:   girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl in a container gave the brush in the cart to a duke .
actual:   girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: The boy on the table laughed .
actual:   * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )

input: The consumer on the bed gave Evelyn a molecule beside the duck .
actual:   * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A boy beside a chair laughed .
actual:   boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )

input: A child on the bed poked a brush in the car .
actual:   child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The dog on a table scoffed .
actual:   * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 4 )
expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )

input: The chicken on a table rented the bean on the log to a girl .
actual:   * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A pony on a crack fed the guitar beside a broker to the sailor .
actual:   pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A teacher beside a table danced .
actual:   teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: The girl in the tin fed the cake beside a clock to Liam .
actual:   * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: A child in a car smiled .
actual:   child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A baby in the car offered a cake on a bible to Charlotte .
actual:   baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The mouse on a table gave the donut in the nest to a cat .
actual:   * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: A boy beside a broker lended Emma the melon on the plate .
actual:   boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: A fish on a leaflet loaned the cat the donut beside the stage .
actual:   fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl on the dog handed a cat the raisin on a table .
actual:   girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: The boy on a towel gave the frog the cake on a booklet .
actual:   * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: The cat beside the stool gave a cake in a cup to a customer .
actual:   * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: A host beside a table smiled .
actual:   host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The cat on the hanger rented the box to a child .
actual:   * cat ( 1 ) ; * hanger ( 4 ) ; * box ( 7 ) ; child ( 10 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 )
expected: * cat ( 1 ) ; * hanger ( 4 ) ; * box ( 7 ) ; child ( 10 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 )

input: The spokesman in the house served Emma the rose .
actual:   * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )
expected: * spokesman ( 1 ) ; * house ( 4 ) ; Emma ( 6 ) ; * rose ( 8 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )

input: A girl on the stool on the table drew a frog .
actual:   girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: A girl in the house slept .
actual:   girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the bed lended the manager the leaf .
actual:   * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * girl ( 1 ) ; * bed ( 4 ) ; * manager ( 7 ) ; * leaf ( 9 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: The baby beside a valve painted the cake .
actual:   * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A girl in a hole slept .
actual:   girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; hole ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A baby in a garden called the raisin .
actual:   baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: baby ( 1 ) ; garden ( 4 ) ; * raisin ( 7 ) ; nmod . in ( 1 , 4 ) AND call ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The child in a drawer gave Amelia a box beside the machine .
actual:   * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * child ( 1 ) ; drawer ( 4 ) ; Amelia ( 6 ) ; box ( 8 ) ; * machine ( 11 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: The professor beside a table appreciated the key in a room .
actual:   * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * professor ( 1 ) ; table ( 4 ) ; * key ( 7 ) ; room ( 10 ) ; nmod . beside ( 1 , 4 ) AND appreciate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A cat on a bag cleaned a chemical in a house .
actual:   cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: cat ( 1 ) ; bag ( 4 ) ; chemical ( 7 ) ; house ( 10 ) ; nmod . on ( 1 , 4 ) AND clean ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A frog beside the table cried .
actual:   frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: frog ( 1 ) ; * table ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A girl beside a rock passed Dylan a pen on a box .
actual:   girl ( 1 ) ; rock ( 4 ) ; Dylan ( 6 ) ; pen ( 8 ) ; box ( 11 ) ; nmod . beside ( 1 , 4 ) AND pass ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: girl ( 1 ) ; rock ( 4 ) ; Dylan ( 6 ) ; pen ( 8 ) ; box ( 11 ) ; nmod . beside ( 1 , 4 ) AND pass ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: A driver beside the bed smiled .
actual:   driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: driver ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A scientist on the desk admired the cake beside the chair .
actual:   scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: scientist ( 1 ) ; * desk ( 4 ) ; * cake ( 7 ) ; * chair ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A bear beside a chair napped .
actual:   bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 4 )
expected: bear ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND nap ( 5 ) AND agent ( 5 , 1 )

input: A horse on the cake investigated the melon on a box .
actual:   horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: horse ( 1 ) ; * cake ( 4 ) ; * melon ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND investigate ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The monster beside a road smiled .
actual:   * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * monster ( 1 ) ; road ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the table dusted the baby .
actual:   * girl ( 1 ) ; * table ( 4 ) ; * baby ( 7 ) ; nmod . beside ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * girl ( 1 ) ; * table ( 4 ) ; * baby ( 7 ) ; nmod . beside ( 1 , 4 ) AND dust ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The girl in the house liked a cake beside a bed .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cake ( 7 ) ; bed ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A girl in the house forwarded Victoria a gumball in the shoe .
actual:   girl ( 1 ) ; * house ( 4 ) ; Victoria ( 6 ) ; gumball ( 8 ) ; * shoe ( 11 ) ; nmod . in ( 1 , 4 ) AND forward ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )
expected: girl ( 1 ) ; * house ( 4 ) ; Victoria ( 6 ) ; gumball ( 8 ) ; * shoe ( 11 ) ; nmod . in ( 1 , 4 ) AND forward ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )

input: A boy in the trailer poked the girl beside a table .
actual:   boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The champion beside a table liked a cake on the computer .
actual:   * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * champion ( 1 ) ; table ( 4 ) ; cake ( 7 ) ; * computer ( 10 ) ; nmod . beside ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The boy in the vase sent the cake on a table to a cat .
actual:   * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * boy ( 1 ) ; * vase ( 4 ) ; * cake ( 7 ) ; table ( 10 ) ; cat ( 13 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: The puppy on the seat poked the boy .
actual:   * puppy ( 1 ) ; * seat ( 4 ) ; * boy ( 7 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * puppy ( 1 ) ; * seat ( 4 ) ; * boy ( 7 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The child on the pad ate the cat .
actual:   * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * child ( 1 ) ; * pad ( 4 ) ; * cat ( 7 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A cat on a sofa slept .
actual:   cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: cat ( 1 ) ; sofa ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A baby on the chair saw the bear .
actual:   baby ( 1 ) ; * chair ( 4 ) ; * bear ( 7 ) ; nmod . on ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: baby ( 1 ) ; * chair ( 4 ) ; * bear ( 7 ) ; nmod . on ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The frog on a mattress ate the radio on the bike .
actual:   * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * frog ( 1 ) ; mattress ( 4 ) ; * radio ( 7 ) ; * bike ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The penguin in the drawer rolled the donut beside the computer .
actual:   * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * penguin ( 1 ) ; * drawer ( 4 ) ; * donut ( 7 ) ; * computer ( 10 ) ; nmod . in ( 1 , 4 ) AND roll ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A deer beside the house slept .
actual:   deer ( 1 ) ; * house ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: deer ( 1 ) ; * house ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The cat in a house adored the donut on a stage .
actual:   * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; house ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The dog on the platter beside a stage slept .
actual:   * dog ( 1 ) ; * platter ( 4 ) ; stage ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 7 )
expected: * dog ( 1 ) ; * platter ( 4 ) ; stage ( 7 ) ; nmod . on ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 1 )

input: The girl beside a table slept .
actual:   * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The horse on the stack loaned the lollipop on a table to Isaac .
actual:   * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * horse ( 1 ) ; * stack ( 4 ) ; * lollipop ( 7 ) ; table ( 10 ) ; Isaac ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The cat on the table awarded a cake on the stand to Oliver .
actual:   * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * stand ( 10 ) ; Oliver ( 12 ) ; nmod . on ( 1 , 4 ) AND award ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: The teacher on the table gave Liam a cake on the tripod .
actual:   * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: * teacher ( 1 ) ; * table ( 4 ) ; Liam ( 6 ) ; cake ( 8 ) ; * tripod ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: A girl on a rock smiled .
actual:   girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; rock ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the stage found the banana in a bucket .
actual:   * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; * stage ( 4 ) ; * banana ( 7 ) ; bucket ( 10 ) ; nmod . beside ( 1 , 4 ) AND find ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The fish beside the seat offered the hamburger beside a key to a frog .
actual:   * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * fish ( 1 ) ; * seat ( 4 ) ; * hamburger ( 7 ) ; key ( 10 ) ; frog ( 13 ) ; nmod . beside ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: The frog on the table gave a cake beside the bottle to James .
actual:   * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * frog ( 1 ) ; * table ( 4 ) ; cake ( 7 ) ; * bottle ( 10 ) ; James ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: A boy on a plate sketched a chicken .
actual:   boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: boy ( 1 ) ; plate ( 4 ) ; chicken ( 7 ) ; nmod . on ( 1 , 4 ) AND sketch ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A boy in the house lended the mouse the cake beside a seat .
actual:   boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: boy ( 1 ) ; * house ( 4 ) ; * mouse ( 7 ) ; * cake ( 9 ) ; seat ( 12 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The director on a bed on the machine lended a farmer the sandwich .
actual:   * director ( 1 ) ; bed ( 4 ) ; * machine ( 7 ) ; farmer ( 10 ) ; * sandwich ( 12 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND lend ( 8 ) AND agent ( 8 , 7 ) AND recipient ( 8 , 10 ) AND theme ( 8 , 12 )
expected: * director ( 1 ) ; bed ( 4 ) ; * machine ( 7 ) ; farmer ( 10 ) ; * sandwich ( 12 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND lend ( 8 ) AND agent ( 8 , 1 ) AND recipient ( 8 , 10 ) AND theme ( 8 , 12 )

input: The frog in a house slept .
actual:   * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The baby in the house promised the donut to the cat .
actual:   * baby ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * cat ( 10 ) ; nmod . in ( 1 , 4 ) AND promise ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 )
expected: * baby ( 1 ) ; * house ( 4 ) ; * donut ( 7 ) ; * cat ( 10 ) ; nmod . in ( 1 , 4 ) AND promise ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 )

input: A bird on a train liked a cake beside a box .
actual:   bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bird ( 1 ) ; train ( 4 ) ; cake ( 7 ) ; box ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl on a booklet walked .
actual:   * girl ( 1 ) ; booklet ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; booklet ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )

input: The cat on a boat gave the box on a table to a boy .
actual:   * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * cat ( 1 ) ; boat ( 4 ) ; * box ( 7 ) ; table ( 10 ) ; boy ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A girl in a room sent a frog a cake beside the pillar .
actual:   girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: girl ( 1 ) ; room ( 4 ) ; frog ( 7 ) ; cake ( 9 ) ; * pillar ( 12 ) ; nmod . in ( 1 , 4 ) AND send ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The girl on a tree offered the boy the banana beside a table .
actual:   * girl ( 1 ) ; tree ( 4 ) ; * boy ( 7 ) ; * banana ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * girl ( 1 ) ; tree ( 4 ) ; * boy ( 7 ) ; * banana ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: The horse on a bed slept .
actual:   * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The prince in a bin smiled .
actual:   * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl beside a stage lended the cake in the house to Liam .
actual:   * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; stage ( 4 ) ; * cake ( 7 ) ; * house ( 10 ) ; Liam ( 12 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . in ( 7 , 10 )

input: A bear on the seat discovered a boy beside a stage .
actual:   bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: bear ( 1 ) ; * seat ( 4 ) ; boy ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND discover ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The turkey in the storage held a cake beside a table .
actual:   * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * turkey ( 1 ) ; * storage ( 4 ) ; cake ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND hold ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The girl in a box liked the donut beside a stage .
actual:   * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; box ( 4 ) ; * donut ( 7 ) ; stage ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The teacher in the trap slept .
actual:   * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * teacher ( 1 ) ; * trap ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl on the corpse in a glass admired a teacher .
actual:   girl ( 1 ) ; * corpse ( 4 ) ; glass ( 7 ) ; teacher ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND admire ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: girl ( 1 ) ; * corpse ( 4 ) ; glass ( 7 ) ; teacher ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . in ( 4 , 7 ) AND admire ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: A girl in a envelope sold Liam the cake beside the computer .
actual:   girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A girl beside the table gave a mouse a mirror in the saucepan .
actual:   girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )
expected: girl ( 1 ) ; * table ( 4 ) ; mouse ( 7 ) ; mirror ( 9 ) ; * saucepan ( 12 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )

input: The monkey on the futon gave the cat a pretzel .
actual:   * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * monkey ( 1 ) ; * futon ( 4 ) ; * cat ( 7 ) ; pretzel ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A boy in the haystack slept .
actual:   boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A dog in the wardrobe smiled .
actual:   dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A girl on the table ate the ball in a cafe .
actual:   girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * ball ( 7 ) ; cafe ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl in the taxi slept .
actual:   * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * taxi ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A child on a table gave Scarlett a balloon beside a lemon .
actual:   child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: child ( 1 ) ; table ( 4 ) ; Scarlett ( 6 ) ; balloon ( 8 ) ; lemon ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: The dog on a chair ate a jigsaw on the paper .
actual:   * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; chair ( 4 ) ; jigsaw ( 7 ) ; * paper ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl on a table smiled .
actual:   girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: A deer beside the table gave Emma a sweetcorn in the garden .
actual:   deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )
expected: deer ( 1 ) ; * table ( 4 ) ; Emma ( 6 ) ; sweetcorn ( 8 ) ; * garden ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )

input: A tiger on a bible slept .
actual:   tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: tiger ( 1 ) ; bible ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The girl beside the road cried .
actual:   * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * road ( 4 ) ; nmod . beside ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A child beside a stage gave Emma a donut beside the house .
actual:   child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: child ( 1 ) ; stage ( 4 ) ; Emma ( 6 ) ; donut ( 8 ) ; * house ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: A professor beside the bed smiled .
actual:   professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: professor ( 1 ) ; * bed ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The cat on the canvas gave the glue beside a table to a girl .
actual:   * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * cat ( 1 ) ; * canvas ( 4 ) ; * glue ( 7 ) ; table ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A girl beside the table saw the cat in a car .
actual:   girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * table ( 4 ) ; * cat ( 7 ) ; car ( 10 ) ; nmod . beside ( 1 , 4 ) AND see ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The cat on the tabletop sold the princess a cake beside a monkey .
actual:   * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: * cat ( 1 ) ; * tabletop ( 4 ) ; * princess ( 7 ) ; cake ( 9 ) ; monkey ( 12 ) ; nmod . on ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl beside a sword ate a fruit in the house .
actual:   girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; sword ( 4 ) ; fruit ( 7 ) ; * house ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The citizen beside the duck adored the drink .
actual:   * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * citizen ( 1 ) ; * duck ( 4 ) ; * drink ( 7 ) ; nmod . beside ( 1 , 4 ) AND adore ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: A girl on the rock walked .
actual:   girl ( 1 ) ; * rock ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * rock ( 4 ) ; nmod . on ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )

input: The girl in the house beside a cage dusted a ball .
actual:   * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: * girl ( 1 ) ; * house ( 4 ) ; cage ( 7 ) ; ball ( 10 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND dust ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

input: The boy beside a bed gave Audrey a cake on the pedestal .
actual:   * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: * boy ( 1 ) ; bed ( 4 ) ; Audrey ( 6 ) ; cake ( 8 ) ; * pedestal ( 11 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: The girl on a table liked a journalist on a stage .
actual:   * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the room cried .
actual:   girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A creature in a house beside the book slept .
actual:   creature ( 1 ) ; house ( 4 ) ; * book ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 7 )
expected: creature ( 1 ) ; house ( 4 ) ; * book ( 7 ) ; nmod . in ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND sleep ( 8 ) AND agent ( 8 , 1 )

input: The girl beside the chair smiled .
actual:   * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: * girl ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl on a tray served the cat a cake .
actual:   * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * girl ( 1 ) ; tray ( 4 ) ; * cat ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A girl in the car liked a bottle in the house .
actual:   girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; * car ( 4 ) ; bottle ( 7 ) ; * house ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: A girl in a house sold the cake beside the stage to Emma .
actual:   girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: girl ( 1 ) ; house ( 4 ) ; * cake ( 7 ) ; * stage ( 10 ) ; Emma ( 12 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: The resident on a computer gave a cake beside a helicopter to the girl .
actual:   * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: * resident ( 1 ) ; computer ( 4 ) ; cake ( 7 ) ; helicopter ( 10 ) ; * girl ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A girl in the swamp painted the glue .
actual:   girl ( 1 ) ; * swamp ( 4 ) ; * glue ( 7 ) ; nmod . in ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: girl ( 1 ) ; * swamp ( 4 ) ; * glue ( 7 ) ; nmod . in ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

input: The girl in a glass served the boy a balloon .
actual:   * girl ( 1 ) ; glass ( 4 ) ; * boy ( 7 ) ; balloon ( 9 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * girl ( 1 ) ; glass ( 4 ) ; * boy ( 7 ) ; balloon ( 9 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A girl in the house gave the host a bat beside the pepper .
actual:   girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: girl ( 1 ) ; * house ( 4 ) ; * host ( 7 ) ; bat ( 9 ) ; * pepper ( 12 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A girl in a container gave the brush in the cart to a duke .
actual:   girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; container ( 4 ) ; * brush ( 7 ) ; * cart ( 10 ) ; duke ( 13 ) ; nmod . in ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: A girl on the surface cried .
actual:   girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The boy beside the whale slept .
actual:   * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; * whale ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The boy on the table laughed .
actual:   * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND laugh ( 5 ) AND agent ( 5 , 1 )

input: The girl beside a table rented Camila the cake beside the bed .
actual:   * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: The consumer on the bed gave Evelyn a molecule beside the duck .
actual:   * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * consumer ( 1 ) ; * bed ( 4 ) ; Evelyn ( 6 ) ; molecule ( 8 ) ; * duck ( 11 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: The lion beside a piano gave the girl the donut .
actual:   * lion ( 1 ) ; piano ( 4 ) ; * girl ( 7 ) ; * donut ( 9 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * lion ( 1 ) ; piano ( 4 ) ; * girl ( 7 ) ; * donut ( 9 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A child on the bed poked a brush in the car .
actual:   child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: child ( 1 ) ; * bed ( 4 ) ; brush ( 7 ) ; * car ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The girl in the cart drew Emma .
actual:   * girl ( 1 ) ; * cart ( 4 ) ; Emma ( 6 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 6 )
expected: * girl ( 1 ) ; * cart ( 4 ) ; Emma ( 6 ) ; nmod . in ( 1 , 4 ) AND draw ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 6 )

input: The child beside a chair ate the rose beside a shoe .
actual:   * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: * child ( 1 ) ; chair ( 4 ) ; * rose ( 7 ) ; shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: The chicken on a table rented the bean on the log to a girl .
actual:   * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )
expected: * chicken ( 1 ) ; table ( 4 ) ; * bean ( 7 ) ; * log ( 10 ) ; girl ( 13 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . on ( 7 , 10 )

input: A pony on a crack fed the guitar beside a broker to the sailor .
actual:   pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )
expected: pony ( 1 ) ; crack ( 4 ) ; * guitar ( 7 ) ; broker ( 10 ) ; * sailor ( 13 ) ; nmod . on ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . beside ( 7 , 10 )

input: A teacher on the table cried .
actual:   teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 4 )
expected: teacher ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: A teacher beside a table danced .
actual:   teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: teacher ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: The girl in the tin fed the cake beside a clock to Liam .
actual:   * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )
expected: * girl ( 1 ) ; * tin ( 4 ) ; * cake ( 7 ) ; clock ( 10 ) ; Liam ( 12 ) ; nmod . in ( 1 , 4 ) AND feed ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . beside ( 7 , 10 )

input: The kid on a trampoline slept .
actual:   * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * kid ( 1 ) ; trampoline ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The girl beside a bed crumpled the goose in the basin .
actual:   * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * girl ( 1 ) ; bed ( 4 ) ; * goose ( 7 ) ; * basin ( 10 ) ; nmod . beside ( 1 , 4 ) AND crumple ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The boy on the stage offered the girl a cookie .
actual:   * boy ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cookie ( 9 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * boy ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cookie ( 9 ) ; nmod . on ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

input: A child in a car smiled .
actual:   child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: child ( 1 ) ; car ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl in the tub lended Emma the cake .
actual:   * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )
expected: * girl ( 1 ) ; * tub ( 4 ) ; Emma ( 6 ) ; * cake ( 8 ) ; nmod . in ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )

input: The girl on the table collapsed the rose on the trampoline .
actual:   * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; * table ( 4 ) ; * rose ( 7 ) ; * trampoline ( 10 ) ; nmod . on ( 1 , 4 ) AND collapse ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A baby in the car offered a cake on a bible to Charlotte .
actual:   baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )
expected: baby ( 1 ) ; * car ( 4 ) ; cake ( 7 ) ; bible ( 10 ) ; Charlotte ( 12 ) ; nmod . in ( 1 , 4 ) AND offer ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 12 ) AND nmod . on ( 7 , 10 )

input: A girl beside a stage cooked a cake in the shoe .
actual:   girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: girl ( 1 ) ; stage ( 4 ) ; cake ( 7 ) ; * shoe ( 10 ) ; nmod . beside ( 1 , 4 ) AND cook ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The lion beside a stage beside the bench gave a girl the pillow .
actual:   * lion ( 1 ) ; stage ( 4 ) ; * bench ( 7 ) ; girl ( 10 ) ; * pillow ( 12 ) ; nmod . beside ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND give ( 8 ) AND agent ( 8 , 7 ) AND recipient ( 8 , 10 ) AND theme ( 8 , 12 )
expected: * lion ( 1 ) ; stage ( 4 ) ; * bench ( 7 ) ; girl ( 10 ) ; * pillow ( 12 ) ; nmod . beside ( 1 , 4 ) AND nmod . beside ( 4 , 7 ) AND give ( 8 ) AND agent ( 8 , 1 ) AND recipient ( 8 , 10 ) AND theme ( 8 , 12 )

input: The mouse on a table gave the donut in the nest to a cat .
actual:   * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: * mouse ( 1 ) ; table ( 4 ) ; * donut ( 7 ) ; * nest ( 10 ) ; cat ( 13 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: A girl on the chair slept .
actual:   girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: girl ( 1 ) ; * chair ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A boy beside a broker lended Emma the melon on the plate .
actual:   boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )
expected: boy ( 1 ) ; broker ( 4 ) ; Emma ( 6 ) ; * melon ( 8 ) ; * plate ( 11 ) ; nmod . beside ( 1 , 4 ) AND lend ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . on ( 8 , 11 )

input: A fish on a leaflet loaned the cat the donut beside the stage .
actual:   fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )
expected: fish ( 1 ) ; leaflet ( 4 ) ; * cat ( 7 ) ; * donut ( 9 ) ; * stage ( 12 ) ; nmod . on ( 1 , 4 ) AND loan ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . beside ( 9 , 12 )

input: A priest on the box admired a cake on the table .
actual:   priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: priest ( 1 ) ; * box ( 4 ) ; cake ( 7 ) ; * table ( 10 ) ; nmod . on ( 1 , 4 ) AND admire ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The child beside the chair slept .
actual:   * child ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: * child ( 1 ) ; * chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A chicken in a car served a cat a box in the bun .
actual:   chicken ( 1 ) ; car ( 4 ) ; cat ( 7 ) ; box ( 9 ) ; * bun ( 12 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )
expected: chicken ( 1 ) ; car ( 4 ) ; cat ( 7 ) ; box ( 9 ) ; * bun ( 12 ) ; nmod . in ( 1 , 4 ) AND serve ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . in ( 9 , 12 )

input: A girl on the dog handed a cat the raisin on a table .
actual:   girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: girl ( 1 ) ; * dog ( 4 ) ; cat ( 7 ) ; * raisin ( 9 ) ; table ( 12 ) ; nmod . on ( 1 , 4 ) AND hand ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: The boy on a towel gave the frog the cake on a booklet .
actual:   * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )
expected: * boy ( 1 ) ; towel ( 4 ) ; * frog ( 7 ) ; * cake ( 9 ) ; booklet ( 12 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 ) AND nmod . on ( 9 , 12 )

input: The cat beside the stool gave a cake in a cup to a customer .
actual:   * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )
expected: * cat ( 1 ) ; * stool ( 4 ) ; cake ( 7 ) ; cup ( 10 ) ; customer ( 13 ) ; nmod . beside ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 13 ) AND nmod . in ( 7 , 10 )

input: A cow in the puddle slept .
actual:   cow ( 1 ) ; * puddle ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 4 )
expected: cow ( 1 ) ; * puddle ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A director in a house walked .
actual:   director ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 4 )
expected: director ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND walk ( 5 ) AND agent ( 5 , 1 )

input: A host beside a table smiled .
actual:   host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 4 )
expected: host ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The cat on the hanger rented the box to a child .
actual:   * cat ( 1 ) ; * hanger ( 4 ) ; * box ( 7 ) ; child ( 10 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 )
expected: * cat ( 1 ) ; * hanger ( 4 ) ; * box ( 7 ) ; child ( 10 ) ; nmod . on ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND recipient ( 5 , 10 )

input: The boy beside a chair danced .
actual:   * boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 4 )
expected: * boy ( 1 ) ; chair ( 4 ) ; nmod . beside ( 1 , 4 ) AND dance ( 5 ) AND agent ( 5 , 1 )

input: The baby on the stage gave the girl a cake .
actual:   * baby ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 4 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )
expected: * baby ( 1 ) ; * stage ( 4 ) ; * girl ( 7 ) ; cake ( 9 ) ; nmod . on ( 1 , 4 ) AND give ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 7 ) AND theme ( 5 , 9 )

Wu et al 2023 baseline model errors NOT as predicted (25 out of 765; 3.3%)¶

In [ ]:
len(example_agent_left_single_point_mismatch_not_nmod_substitution_all)
Out[ ]:
25
In [ ]:
for example in example_agent_left_single_point_mismatch_not_nmod_substitution_all:
  print(example)
input: A girl in a envelope sold Liam the cake beside the computer .
actual:   girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: girl ( 1 ) ; envelope ( 4 ) ; Liam ( 6 ) ; * cake ( 8 ) ; * computer ( 11 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A donkey in the room sold Ella a donut .
actual:   donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )
expected: donkey ( 1 ) ; * room ( 4 ) ; Ella ( 6 ) ; donut ( 8 ) ; nmod . in ( 1 , 4 ) AND sell ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 )

input: A girl in the house slept .
actual:   girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 ) AND
expected: girl ( 1 ) ; * house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A boy in the trailer poked the girl beside a table .
actual:   boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )
expected: boy ( 1 ) ; * trailer ( 4 ) ; * girl ( 7 ) ; table ( 10 ) ; nmod . in ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . beside ( 7 , 10 )

input: A student in a pot liked the girl on a chair .
actual:   student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: student ( 1 ) ; pot ( 4 ) ; * girl ( 7 ) ; chair ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl on the surface screamed .
actual:   girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 ) AND
expected: girl ( 1 ) ; * surface ( 4 ) ; nmod . on ( 1 , 4 ) AND scream ( 5 ) AND agent ( 5 , 1 )

input: The girl beside a table slept .
actual:   * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 ) AND
expected: * girl ( 1 ) ; table ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The frog in a house slept .
actual:   * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 ) AND
expected: * frog ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The dog on the stage ate the boy on a seat .
actual:   * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * dog ( 1 ) ; * stage ( 4 ) ; * boy ( 7 ) ; seat ( 10 ) ; nmod . on ( 1 , 4 ) AND eat ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A coach on the table talked .
actual:   coach ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND talk ( 5 ) AND agent ( 5 , 1 ) AND
expected: coach ( 1 ) ; * table ( 4 ) ; nmod . on ( 1 , 4 ) AND talk ( 5 ) AND agent ( 5 , 1 )

input: The horse on a bed slept .
actual:   * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 ) AND
expected: * horse ( 1 ) ; bed ( 4 ) ; nmod . on ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: The prince in a bin smiled .
actual:   * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 ) AND
expected: * prince ( 1 ) ; bin ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The chicken on the table poked the child in a cup .
actual:   * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )
expected: * chicken ( 1 ) ; * table ( 4 ) ; * child ( 7 ) ; cup ( 10 ) ; nmod . on ( 1 , 4 ) AND poke ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . in ( 7 , 10 )

input: The frog beside a doll slept .
actual:   * frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 ) AND
expected: * frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A boy in the haystack slept .
actual:   boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 ) AND
expected: boy ( 1 ) ; * haystack ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A dog in the wardrobe smiled .
actual:   dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 ) AND
expected: dog ( 1 ) ; * wardrobe ( 4 ) ; nmod . in ( 1 , 4 ) AND smile ( 5 ) AND agent ( 5 , 1 )

input: The girl on a table liked a journalist on a stage .
actual:   * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * girl ( 1 ) ; table ( 4 ) ; journalist ( 7 ) ; stage ( 10 ) ; nmod . on ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: A girl in the room cried .
actual:   girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 ) AND
expected: girl ( 1 ) ; * room ( 4 ) ; nmod . in ( 1 , 4 ) AND cry ( 5 ) AND agent ( 5 , 1 )

input: The mouse in the crate liked a professor on the road .
actual:   * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 7 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )
expected: * mouse ( 1 ) ; * crate ( 4 ) ; professor ( 7 ) ; * road ( 10 ) ; nmod . in ( 1 , 4 ) AND like ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 ) AND nmod . on ( 7 , 10 )

input: The girl in a house scoffed .
actual:   * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 ) AND
expected: * girl ( 1 ) ; house ( 4 ) ; nmod . in ( 1 , 4 ) AND scoff ( 5 ) AND agent ( 5 , 1 )

input: The dog on a table snored .
actual:   * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 ) AND
expected: * dog ( 1 ) ; table ( 4 ) ; nmod . on ( 1 , 4 ) AND snore ( 5 ) AND agent ( 5 , 1 )

input: A cow in the puddle slept .
actual:   cow ( 1 ) ; * puddle ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 ) AND
expected: cow ( 1 ) ; * puddle ( 4 ) ; nmod . in ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )

input: A girl in the house forwarded Victoria a gumball in the shoe .
actual:   girl ( 1 ) ; * house ( 4 ) ; Victoria ( 6 ) ; gumball ( 8 ) ; * shoe ( 11 ) ; nmod . in ( 1 , 4 ) AND forward ( 5 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )
expected: girl ( 1 ) ; * house ( 4 ) ; Victoria ( 6 ) ; gumball ( 8 ) ; * shoe ( 11 ) ; nmod . in ( 1 , 4 ) AND forward ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . in ( 8 , 11 )

input: The girl beside a table rented Camila the cake beside the bed .
actual:   * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 6 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )
expected: * girl ( 1 ) ; table ( 4 ) ; Camila ( 6 ) ; * cake ( 8 ) ; * bed ( 11 ) ; nmod . beside ( 1 , 4 ) AND rent ( 5 ) AND agent ( 5 , 1 ) AND recipient ( 5 , 6 ) AND theme ( 5 , 8 ) AND nmod . beside ( 8 , 11 )

Note that about half of the errors not matching our predicted form for the obj_pp_to_subj_pp split are cases of the Wu et al 2023 baseline Transformer appending a spurious "AND" to the end of the output (this does not weaken our prediction; it is a separate and relatively rare error mechanism).

In the few remaining cases, a different noun is substituted.
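A minimal sketch of how these two error categories could be separated automatically (assuming string-exact logical forms as in the listings above; the helper name `classify_error` is ours, for illustration):

```python
def classify_error(actual, expected):
    """Classify a mismatch between model output and reference logical form.

    Returns 'trailing_and' if the only difference is a spurious 'AND'
    appended to the end of the output, else 'other' (e.g. a wrong noun
    index substituted into an agent/theme/recipient relation)."""
    actual, expected = actual.strip(), expected.strip()
    if actual == expected + " AND":
        return "trailing_and"
    return "other"

# Example copied from the listings above ("The frog beside a doll slept ."):
actual_1 = "* frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 ) AND"
expected_1 = "* frog ( 1 ) ; doll ( 4 ) ; nmod . beside ( 1 , 4 ) AND sleep ( 5 ) AND agent ( 5 , 1 )"
print(classify_error(actual_1, expected_1))  # trailing_and
```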

Upcoming - testing the prediction that between pp depth 1 and depth 2 the Wu et al 2023 baseline Transformer's misassigned agent index changes predictably (for an agent left of the verb, it should be the pp noun closest to the verb) (obsolete, combined into above)¶

(In this simple grammar the misassigned index is (expected index) + 3*(pp depth), or + 3*(pp depth) - 1 if the trailing noun phrase in the prepositional phrase chain is a proper noun. Proper nouns cannot be modified by prepositional phrases in this grammar, so only the rightmost pp noun can be a proper noun: "Noah beside Liam" is not allowed, but "a boy beside Liam" is. The key point is that we predict the misassigned agent is the noun closest to the left of the verb; the offsets are just a convenient way to check that.)

Testing predictions by the RASP model of specific logical-form errors by the Wu et al 2023 baseline Encoder-Decoder Transformer - confirming that going from pp depth 1 to depth 2 shifts the agent misattribution to the closest pp noun as predicted (e.g. for an agent at index 1, the misattributed index is 4 at depth 1 but 7 at depth 2).

In the examples listed above we already see clearly what we would expect if the mechanism of verb-relationship matching is a flat, fixed pattern match (as in our RASP model):

e.g. for pp depth 1, as expected, the mistake is to put agent index 4 instead of 1 (1 + 3*1 == 4); there is no proper noun in the last prepositional phrase, so we add 3 for "pp det common_noun" (had it been a proper noun, the phrase would be "pp proper_noun", adding 2 instead):

input: The baby beside a valve painted the cake .
actual:   * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 4 ) AND theme ( 5 , 7 )
expected: * baby ( 1 ) ; valve ( 4 ) ; * cake ( 7 ) ; nmod . beside ( 1 , 4 ) AND paint ( 5 ) AND agent ( 5 , 1 ) AND theme ( 5 , 7 )

whereas, e.g., for pp depth 2 on an agent left of the verb, as expected, the mistake is to put agent index 7 instead of 1 (the pp noun closest to the verb steals the agent role, not the other pp noun at index 4):

input: A girl on the stool on the table drew a frog .
actual:   girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 7 ) AND theme ( 8 , 10 )
expected: girl ( 1 ) ; * stool ( 4 ) ; * table ( 7 ) ; frog ( 10 ) ; nmod . on ( 1 , 4 ) AND nmod . on ( 4 , 7 ) AND draw ( 8 ) AND agent ( 8 , 1 ) AND theme ( 8 , 10 )

but I had not yet measured that aspect in the summary metrics of the first analysis to confirm it holds generally.

The error formula in the agent-left-of-verb case for this grammar (in which proper nouns cannot appear on the left side of an nmod) is:

(this is what we expect the baseline Wu et al 2023 Transformer to produce by mistake)

(actual index predicted) = (expected index) + 3 * (num pp before verb) (minus 1 if the last prepositional noun is a proper noun)
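The formula above can be sketched as a small helper (the function name `predicted_wrong_agent_index` is ours, for illustration), checked against the depth-1 and depth-2 examples from this section:

```python
def predicted_wrong_agent_index(expected_index, num_pp_before_verb,
                                last_pp_noun_is_proper=False):
    """Index we predict the baseline Transformer mistakenly emits for the
    agent when the subject (left of the verb) carries prepositional
    phrases. Each 'pp det common_noun' inserts 3 tokens; a trailing
    proper noun ('pp proper_noun') inserts only 2, hence the -1."""
    offset = 3 * num_pp_before_verb - (1 if last_pp_noun_is_proper else 0)
    return expected_index + offset

# Depth 1: "The baby beside a valve painted the cake ." -> agent 4, not 1
print(predicted_wrong_agent_index(1, 1))  # 4
# Depth 2: "A girl on the stool on the table drew a frog ." -> agent 7, not 1
print(predicted_wrong_agent_index(1, 2))  # 7
```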

(a new notebook covering this case will be linked shortly; content is being drafted at the bottom of https://colab.research.google.com/drive/1cmKPu17lp5jvatsLYSTuHu5bHeABgDE8 )

Upcoming - agent-left multiple-point errors¶

We do not have a hypothesis here, but we could do an exploratory analysis and report back.
